Why QAnon is attracting so many followers in Australia — and how it can be countered



SCOTT BARBOUR/AAP

Kaz Ross, University of Tasmania

On September 5, a coalition of online groups is planning an Australia-wide action called the “Day of Freedom”. The organisers claim hundreds of thousands will take to the streets in defiance of restrictions on group gatherings and mask-wearing mandates.

Some online supporters believe Stage 5 lockdown will be introduced in Melbourne the following week and the “Day of Freedom” is the last chance for Australians to stand up to an increasingly tyrannical government.

The action is the latest in a series of protests in Australia against the government’s COVID-19 restrictions. The main issues brought up during these protests centre around 5G, government surveillance, freedom of movement and, of course, vaccinations.

And one general conspiracy theory now unites these disparate groups — QAnon.




Read more:
QAnon believers will likely outlast and outsmart Twitter’s bans


Why QAnon has exploded in popularity globally

Since its inception in the US in late 2017, QAnon has morphed beyond a specific, unfounded claim about President Donald Trump working with special counsel Robert Mueller to expose a paedophile ring supposedly run by Bill and Hillary Clinton and the “deep state”. Now, it is an all-encompassing world of conspiracies.

QAnon conspiracy theories now range from wild claims that Microsoft founder Bill Gates is using coronavirus as a cover to implant microchips in people, to governments erecting 5G towers during lockdown to surveil the population.

Donald Trump has tacitly endorsed QAnon, saying its followers “love our country”.
Leah Millis/Reuters

Last week, Facebook deleted over 790 groups, 100 pages and 1,500 ads tied to QAnon and restricted the accounts of hundreds of other Facebook groups and thousands of Instagram accounts. QAnon-related newsfeed rankings and search results were also downgraded.

Facebook is aiming to reduce the organising ability of the QAnon community, but so far such crackdowns seem to have had little effect on the spread of misinformation.

In July, Twitter removed 7,000 accounts, but the QAnon conspiracy has become even more widespread since then. A series of global “save the children” protests in the last few weeks is proof of how resilient and adaptable the community is.

Why Australians are turning to QAnon in large numbers

QAnon encourages people to look for evidence of conspiracies in the media and in government actions. Looking back over the last several years, we can see a range of events or conspiracy theories that have helped QAnon appeal to increasing numbers of followers in Australia.

1) Conspiracies about global governance

In 2015, Senator Malcolm Roberts denounced the UN’s 1992 “Agenda 21” plan for sustainable development as a foreign global plan aimed at depriving nations of their sovereignty and citizens of their property rights.

The belief that “Agenda 21” is a blueprint for corrupt global governance has become a core tenet of QAnon in Australia.

Any talk of “global bankers and cabals” directly taps into longstanding anti-Semitic conspiracies about supposed Jewish world domination often centred on the figure of billionaire George Soros. The pandemic and QAnon have also proven to be fertile ground for neo-Nazis in Australia.

2) Impact of far-right social media

QAnon has its roots on the far-right bulletin boards of the websites 4Chan and 8Chan. Other campaigns from the same sources, such as the “It’s OK to be White” motion led by One Nation leader Pauline Hanson in the Senate, have been remarkably successful in Australia, showing our susceptibility to viral trolling efforts.

3) Perceived paedophiles in power

During the Royal Commission into Institutional Responses to Child Sexual Abuse, Senator Bill Heffernan tried unsuccessfully to submit the names of 28 prominent Australians who he alleged were paedophiles.

His failure is widely shared in QAnon circles as proof of a cover-up of child abuse at all levels of Australian government. The belief the country is run by a corrupt paedophile cabal is the most fundamental plank of the QAnon platform.

Among the QAnon conspiracy theories in the US is that Hollywood actors have engaged in crimes against children.
CHRISTIAN MONTERROSA/EPA

4) Increasingly ‘unaccountable and incompetent’ governments

A number of recent events have eroded public trust in government — from the “sports rorts affair” to the Witness K case — and all serve to further fuel the QAnon suspicion of authority figures.

5) Longstanding alternative health lobbies

Australia’s sizeable anti-vax movement has found great support in the QAnon community. Fear about mandatory vaccinations is widespread, as is a distrust of “big pharma”.

Also, the continuing roll-out of 5G technology throughout the pandemic has confirmed the belief among QAnon followers that there are ulterior motives for the lockdown. Wellness influencers such as celebrity chef Pete Evans have amplified these messages to their millions of followers.

6) The ‘plandemic’ and weaponising of COVID-19

In the QAnon world, debates about the origin of the coronavirus, death rates, definition of cases, testing protocols and possible treatments are underpinned by a belief that governments are covering up the truth. Many believe the virus isn’t real or deadly, or it was deliberately introduced to hasten government control of populations.

Understanding QAnon followers

Understanding why people become part of these movements is the key to stopping the spread of the QAnon virus. Research into extremist groups shows four elements are important:

1) Real or perceived personal and collective grievances

This year, some of these grievances have been linked directly to the pandemic: government lockdown restrictions, a loss of income, fear about the future and disruption of plans such as travel.

2) Networks and personal ties

Social media has given people the ability to find others with similar grievances or beliefs, to share doubts and concerns and to learn about connecting theories and explanations for what may be troubling them.

3) Political and religious ideologies

QAnon is very hierarchically structured, similar to evangelical Christianity. QAnon followers join a select group of truth seekers who are following the “light” and have a duty to wake up the “sheeple”. Like some religions, the QAnon world is welcoming to all and provides a strong sense of community united by a noble purpose and hope for a better future.

4) Enabling environments and support structures

In the QAnon world, spending many hours on social media is valued as doing “research” and seen as an antidote to the so-called fake news of the mainstream media.

Social isolation, a barrage of changing and confusing pandemic news and obliging social media platforms have been a boon for QAnon groups. However, simply banning or deleting groups runs the danger of confirming the beliefs of QAnon followers.




Read more:
How misinformation about 5G is spreading within our government institutions – and who’s responsible


So what can be done?

Governments need to be more sensitive in their messaging and avoid triggering panic around sensitive issues such as mandatory or forced vaccinations. Transparency about government actions, policies and mistakes all help to build trust.

Governments also need to ensure they are providing enough resources to support people during this challenging time, particularly when it comes to mental and emotional well-being. Resourcing community-building to counter isolation is vital.

For families and friends, losing a loved one “down the Q rabbit hole” is distressing. Research shows that arguing over facts and myths doesn’t work.

Like many conspiracy theories, QAnon contains elements of truth. Empathy and compassion, rather than ridicule and ostracism, are the keys to remaining connected to the Q follower in your life. Hopefully, with time, they’ll come back.The Conversation

Kaz Ross, Lecturer in Humanities (Asian Studies), University of Tasmania

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Coronavirus misinformation is a global issue, but which myth you fall for likely depends on where you live


Jason Weismueller, University of Western Australia; Jacob Shapiro, Princeton University; Jan Oledan, Princeton University, and Paul Harrigan, University of Western Australia

In February, major social media platforms attended a meeting hosted by the World Health Organisation to address coronavirus misinformation. The aim was to catalyse the fight against what the United Nations has called an “infodemic”.

Usually, misinformation is focused on specific regions and topics. But COVID-19 is different. For what seems like the first time, both misinformation and fact-checking behaviours are coordinated around a common set of narratives the world over.

In our research, we identified the key trends in both coronavirus misinformation and fact-checking efforts. Using Google’s Fact Check Explorer programming interface, we tracked fact-check posts from January to July – with the first checks appearing as early as January 22.

Google’s Fact Check Explorer database is connected with a range of fact-checkers, most of which are part of the Poynter Institute’s International Fact-Checking Network.
Screenshot

A uniform rate of growth

Our research found the volume of fact-checks on coronavirus misinformation increased steadily in the early stages of the virus’s spread (January and February) and then increased sharply in March and April – when the virus started to spread globally.

Interestingly, we found the same pattern of gradual and then sudden increase even when fact-checks were broken down by language, including Spanish, Hindi, Indonesian and Portuguese.

Thus, misinformation and subsequent fact-checking efforts trended in a similar way right across the globe. This is a unique feature of COVID-19.

According to our analysis, there has been no equivalent global trend for other issues such as elections, terrorism, police activity or immigration.

Different nations, different misconceptions

On March 16, the Empirical Studies of Conflict Project, in collaboration with Microsoft Research, began cataloguing COVID-19 misinformation.

It did this by collating news articles with reporting by a wide range of local fact-checking networks and global groups such as Agence France-Presse and NewsGuard.

We analysed this data set to explore the evolution of specific COVID-19 narratives, with “narrative” referring to the type of story a piece of misinformation pushes.

For instance, one misinformation narrative concerns the “origin of the virus”. This includes the false claim the virus jumped to humans as a result of someone eating bat soup.




Read more:
The Conversation’s FactCheck granted accreditation by International Fact-Checking Network at Poynter


We found the most common narrative worldwide was related to “emergency responses”. These stories reported false information about government or political responses to fighting the virus’s outbreak.

This may be because, unlike narratives surrounding the “nature of the virus”, it is easy to speculate on (and hard to prove) whether people in power have good or ill intent.

Notably, this was also the most common narrative in the US, with an early example being a false rumour the New York Police Department would immediately lock down New York City.

What’s more, a major motivation for spreading misinformation on social media is politics. The US is a polarised political environment, so this might help explain the trend towards political misinformation.

We also found China has more misinformation narratives than any other country. This may be because China is the world’s most populous country.

However, it’s worth noting the main fact-checking website used by the Empirical Studies of Conflict Project for misinformation coming out of China is run by the Chinese Communist Party.

This chart shows the proportion of total misinformation narratives on COVID-19 by the top ten countries between January and July, 2020.

When fighting misinformation, it is important to have as wide a range of independent and transparent fact-checkers as possible. This reduces the potential for bias.

Hydroxychloroquine and other (non) ‘cures’

Another set of misinformation narratives was focused on “false cures” or “false preventative measures”. This was among the most common themes in both China and Australia.

One example was a video that went viral on social media suggesting hydroxychloroquine is an effective coronavirus treatment. This is despite experts stating it is not a proven COVID-19 treatment, and can actually have harmful side effects.

Myths about the “nature of the virus” were also common. These referred to specific characteristics of the virus – such as that it can’t spread on surfaces. We know this isn’t true.




Read more:
We know how long coronavirus survives on surfaces. Here’s what it means for handling money, food and more


Narratives reflect world events

Our analysis found different narratives peaked at different stages of the virus’s spread.

Misinformation about the nature of the virus was prevalent during the outbreak’s early stages, probably spurred by an initial lack of scientific research regarding the nature of the virus.

In contrast, theories relating to emergency responses surfaced later and remain even now, as governments continue to implement measures to fight COVID-19’s spread.

A wide variety of fact-checkers

We also identified greater diversity in websites fact-checking COVID-19 misinformation, compared to those investigating other topics.

Since January, only 25% of 6,000 fact-check posts or articles were published by the top five fact-checking websites (ranked by number of posts). In comparison, 68% of 3,000 climate change fact-checks were published by the top five websites.
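These concentration figures are simple proportions. As a rough sketch, with hypothetical per-site post counts constructed to reproduce the article’s percentages, the share published by the top five sites can be computed like this:

```python
def top_share(post_counts, n=5):
    """Fraction of all fact-check posts published by the n most prolific sites."""
    ranked = sorted(post_counts, reverse=True)
    return sum(ranked[:n]) / sum(ranked)

# Hypothetical per-site counts, chosen to match the article's figures:
covid_counts = [400, 350, 300, 250, 200] + [45] * 100   # 6,000 posts, long tail
climate_counts = [900, 500, 350, 200, 90] + [96] * 10   # 3,000 posts, concentrated

round(top_share(covid_counts), 2)    # 0.25 (25% from the top five)
round(top_share(climate_counts), 2)  # 0.68 (68% from the top five)
```

A lower top-five share indicates a more diverse fact-checking ecosystem.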




Read more:
5 ways to help stop the ‘infodemic,’ the increasing misinformation about coronavirus


It seems resources previously devoted to a wide range of topics are now homing in on coronavirus misinformation. Nonetheless, it’s impossible to know the total volume of this content online.

For now, the best defence is for governments and online platforms to increase awareness about false claims and build on the robust fact-checking infrastructures at our disposal.The Conversation

Jason Weismueller, Doctoral Researcher, University of Western Australia; Jacob Shapiro, Professor of Politics and International Affairs, Princeton University; Jan Oledan, Research Specialist, Princeton University, and Paul Harrigan, Associate Professor of Marketing, University of Western Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Young men are more likely to believe COVID-19 myths. So how do we actually reach them?



Shutterstock

Carissa Bonner, University of Sydney; Brooke Nickel, University of Sydney, and Kristen Pickles, University of Sydney

If the media is anything to go by, you’d think people who believe coronavirus myths are white, middle-aged women called Karen.

But our new study shows a different picture. We found men and people aged 18-25 are more likely to believe COVID-19 myths. We also found an increase among people from a non-English speaking background.

While we’ve heard recently about the importance of public health messages reaching people whose first language isn’t English, we’ve heard less about reaching young men.




Read more:
We asked multicultural communities how best to communicate COVID-19 advice. Here’s what they told us


What did we find?

Sydney Health Literacy Lab has been running a national COVID-19 survey of more than 1,000 social media users each month since Australia’s first lockdown.

A few weeks in, our initial survey showed younger people and men were more likely to think the benefit of herd immunity was covered up, and the threat of COVID-19 was exaggerated.

People who agreed with such statements were less likely to want to receive a future COVID-19 vaccine.




Read more:
The ‘herd immunity’ route to fighting coronavirus is unethical and potentially dangerous


In June, after restrictions eased, we asked social media users about more specific myths. We found:

  • men and younger people were more likely to believe prevention myths, such as that heat or UV light can kill the virus that causes COVID-19

  • people with lower education and more social disadvantage were more likely to believe causation myths, such as 5G being used to spread the virus

  • younger people were more likely to believe cure myths, such as vitamin C and hydroxychloroquine being effective treatments.

We need more targeted research with young Australians, and men in particular, about why some of them believe these myths and what might change their mind.




Read more:
No, 5G radiation doesn’t cause or spread the coronavirus. Saying it does is destructive


Although our research has yet to be formally peer-reviewed, it reflects what other researchers have found, both in Australia and internationally.

An Australian poll in May found similar patterns, in which men and younger people believed a range of myths more than other groups.

In the UK, younger people are more likely to hold conspiracy beliefs about COVID-19. American men are also more likely to agree with COVID-19 conspiracy theories than women.

Why is it important to reach this demographic?

We need to reach young people with health messaging for several reasons.

The Victorian and New South Wales premiers have appealed to young people to limit socialising.

But is this enough when young people are losing interest in COVID-19 news? How many 20-year-old men follow Daniel Andrews on Twitter, or watch Gladys Berejiklian on television?

How can we reach young people?

We need to involve young people in the design of COVID-19 messages to get the delivery right, if we are to convince them to socialise less and follow prevention advice. We need to include them rather than blame them.

We can do this by testing our communications on young people or running consumer focus groups before releasing them to the public. We can include young people on public health communications teams.

We can also borrow strategies from marketing. For example, we know how tobacco companies use social media to effectively target young people. Paying popular influencers on platforms such as TikTok to promote reliable information is one option.




Read more:
Most adults have never heard of TikTok. That’s by design


We can target specific communities to reach young men who might not access mainstream media, for instance, gamers who have many followers on YouTube.

We also know humour can be more effective than serious messages to counteract science myths.

Some great examples

There are social media campaigns happening right now to address COVID-19, and they might reach more young men than traditional public health methods.

NSW Health has recently started a campaign #Itest4NSW encouraging young people to upload videos to social media in support of COVID-19 testing.

The United Nations is running the global Verified campaign involving an army of volunteers to help spread more reliable information on social media. This may be a way to reach private groups on WhatsApp and Facebook Messenger, where misinformation spreads under the radar.

Telstra is using Australian comedian Mark Humphries to address 5G myths in a satirical way (although this would probably have more credibility if it didn’t come from a vested interest).


Finally, tech companies like Facebook are partnering with health organisations to flag misleading content and prioritise more reliable information. But this is just a start to address the huge problem of misinformation in health.




Read more:
Why is it so hard to stop COVID-19 misinformation spreading on social media?


But we need more

We can’t expect young men to access reliable COVID-19 messages from people they don’t know, through media they don’t use. To reach them, we need to build new partnerships with the influencers they trust and the social media companies that control their information.

It’s time to change our approach to public health communication, to counteract misinformation and ensure all communities can access, understand and act on reliable COVID-19 prevention advice.The Conversation

Carissa Bonner, Research Fellow, University of Sydney; Brooke Nickel, Postdoctoral research fellow, University of Sydney, and Kristen Pickles, Postdoctoral Research Fellow, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How misinformation about 5G is spreading within our government institutions – and who’s responsible



Aris Oikonomou/EPA

Michael Jensen, University of Canberra

“Fake news” is not just a problem of misleading or false claims on fringe websites, it is increasingly filtering into the mainstream and has the potential to be deeply destructive.

My recent analysis of more than 500 public submissions to a parliamentary committee on the launch of 5G in Australia shows just how pervasive misinformation campaigns have become at the highest levels of government. A significant number of the submissions peddled inaccurate claims about the health effects of 5G.

These falsehoods were prominent enough the committee felt compelled to address the issue in its final report. The report noted:

community confidence in 5G has been shaken by extensive misinformation
preying on the fears of the public spread via the internet, and presented as facts, particularly through social media.

This is a remarkable situation for Australian public policy – it is not common for a parliamentary inquiry to have to rebut the dodgy scientific claims it receives in the form of public submissions.

While many Australians might dismiss these claims as fringe conspiracy theories, the reach of this misinformation matters. If enough people act on the basis of these claims, it can cause harm to the wider public.

In late May, for example, protests against 5G, vaccines and COVID-19 restrictions were held in Sydney, Melbourne and Brisbane. Some protesters claimed 5G was causing COVID-19 and the pandemic was a hoax – a “plandemic” – perpetuated to enslave and subjugate the people to the state.




Read more:
Coronavirus, ‘Plandemic’ and the seven traits of conspiratorial thinking


Misinformation can also lead to violence. Last year, the FBI for the first time identified conspiracy theory-driven extremists as a terrorism threat.

Conspiracy theories that 5G causes autism, cancer and COVID-19 have also led to widespread arson attacks in the UK and Canada, along with verbal and physical attacks on employees of telecommunication companies.

The source of conspiracy messaging

To better understand the nature and origins of the misinformation campaigns against 5G in Australia, I examined the 530 submissions posted online to the parliament’s standing committee on communications and the arts.

The majority of submissions were from private citizens. A sizeable number, however, made claims about the health effects of 5G, parroting language from well-known conspiracy theory websites.

A perceived lack of “consent” about the planned 5G roll-out featured prominently in these submissions. One person argued she did not agree to allow 5G to be “delivered directly into” the home and “radiate” her family.




Read more:
No, 5G radiation doesn’t cause or spread the coronavirus. Saying it does is destructive


To connect sentiments like this to conspiracy groups, I looked at two well-known conspiracy sites that have been identified as promoting narratives consistent with Russian misinformation operations – the Centre for Research on Globalization (CRG) and Zero Hedge.

CRG is an organisation founded and directed by Michel Chossudovsky, a former professor at the University of Ottawa and opinion writer for Russia Today.

CRG has been flagged by NATO intelligence as part of wider efforts to undermine trust in “government and public institutions” in North America and Europe.

Zero Hedge, which is registered in Bulgaria, attracts millions of readers every month and ranks among the top 500 sites visited in the US. Most stories are geared toward an American audience.

Researchers at Rand have connected Zero Hedge with online influencers and other media sites known for advancing pro-Kremlin narratives, such as the claim that Ukraine, and not Russia, is to blame for the downing of Malaysia Airlines flight MH17.

Protesters targeting the coronavirus lockdown and 5G in Melbourne in May.
Scott Barbour/AAP

How it was used in parliamentary submissions

For my research, I scoured the top posts circulated by these groups on Facebook for false claims about the health threats posed by 5G. Some stories I found had headlines like “13 Reasons 5G Wireless Technology will be a Catastrophe for Humanity” and “Hundreds of Respected Scientists Sound Alarm about Health Effects as 5G Networks go Global”.

I then tracked the diffusion of these stories on Facebook and identified 10 public groups where they were posted. Two of the groups specifically targeted Australians – Australians for Safe Technology, a group with 48,000 members, and Australia Uncensored. Many others, such as the popular right-wing conspiracy group QAnon, also contained posts about the 5G debate in Australia.




Read more:
Conspiracy theories about 5G networks have skyrocketed since COVID-19


To determine the similarities in phrasing between the articles posted in these Facebook groups and the submissions to the Australian parliamentary committee, I used a text-similarity technique commonly employed to detect plagiarism in student papers.

The analysis rates similarities in documents on a scale of 0 (entirely dissimilar) to 1 (exactly alike). There were 38 submissions with at least a 0.5 similarity to posts in the Facebook group “5G Network, Microwave Radiation Dangers and other Health Problems”, and 35 with at least a 0.5 similarity to posts in the “Australians for Safe Technology” group.

This is significant because it means that, for these 73 submissions, at least half of the language was, word for word, the same as posts from extreme conspiracy groups on Facebook.
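The article doesn’t specify which similarity measure was used. Plagiarism-style comparisons are often based on cosine similarity between word-frequency vectors, which naturally yields the 0-to-1 scale described above. The snippet below is a minimal, illustrative sketch (the example sentences are invented):

```python
import math
import re
from collections import Counter

def similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between word-count vectors: 0 = no shared words, 1 = identical."""
    def vectorise(text):
        return Counter(re.findall(r"[a-z']+", text.lower()))
    a, b = vectorise(text_a), vectorise(text_b)
    dot = sum(a[word] * b[word] for word in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

similarity("5G towers radiate harmful energy", "5G towers radiate harmful energy")  # 1.0
similarity("5G towers radiate harmful energy", "maps can guide public decisions")   # 0.0
```

A submission scoring 0.5 or higher against a Facebook post would, on this kind of measure, share roughly half its wording with that post.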

The first 5G Optus tower in the suburb of Dickson in Canberra.
Mick Tsikas/AAP

The impact of misinformation on policy-making

The process for soliciting submissions to a parliamentary inquiry is an important part of our democracy. In theory, it provides ordinary citizens and organisations with a voice in forming policy.

My findings suggest Facebook conspiracy groups and potentially other conspiracy sites are attempting to co-opt this process to directly influence the way Australians think about 5G.

In the pre-internet age, misinformation campaigns often had limited reach and took a significant amount of time to spread. They typically required the production of falsified documents and a sympathetic media outlet. Mainstream news would usually ignore such stories and few people would ever read them.

Today, however, one only needs to create a false social media account and a meme. Misinformation can spread quickly if it is amplified through online trolls and bots.

It can also spread quickly on Facebook, with its algorithm designed to drive ordinary users to extremist groups and pages by exploiting their attraction to divisive content.

And once this manipulative content has been widely disseminated, countering it is like trying to put toothpaste back in the tube.

Misinformation has the potential to undermine faith in governments and institutions and make it more challenging for authorities to make demonstrable improvements in public life. This is why governments need to be more proactive in effectively communicating technical and scientific information, like details about 5G, to the public.

Just as nature abhors a vacuum, a public sphere without trusted voices quickly becomes filled with misinformation.The Conversation

Michael Jensen, Senior Research Fellow, Institute for Governance and Policy Analysis, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Can I trust this map? 4 questions to ask when you see a map of the coronavirus pandemic



Shutterstock

Amy Griffin, RMIT University

Maps have shown us how the events of this disastrous year have played out around the globe, from the Australian bushfires to the spread of the COVID-19 pandemic. But there are good reasons to question the maps we see.

Some of these reasons have been explored recently through maps of the bushfires or those created from satellite images.

Maps often inform our actions, but how do we know which ones are trustworthy? My research shows that answering this question may be critically important for the world’s most urgent challenge: the COVID-19 pandemic.




Read more:
Satellite imagery is revolutionizing the world. But should we always trust what we see?


Why are trustworthy maps important?

Maps guide decisions, including those made by governments, private companies, and individual citizens. During the pandemic, government restrictions on activities to protect public health have been strongly informed by maps.

Governments rely on public cooperation with the restrictions, and they have used maps to explain the situation and build trust. If people don’t trust information from the government, they may be less likely to comply with the restrictions.

This highlights the importance of trustworthy COVID-19 maps. Maps can be untrustworthy when they don’t show the most relevant or timely information or because they show information in a misleading way.

Below are a few questions you should ask yourself to work out whether you should trust a map you read.

What information is being mapped?

The number of cases of COVID-19 is an important piece of information. But that number could just reflect how many people are being tested. If you don’t know how much testing is being done, you can misjudge the level of risk.

Low case numbers might mean that there isn’t much testing being done. If the percentage of positive cases (positive test rate) is high, we might be missing cases. So not accounting for the number of tests can be misleading.

The World Health Organization suggests testing is adequate only when the positive test rate is at most 10%, that is, roughly ten negative tests for every positive one.
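As a quick illustrative calculation (the numbers here are hypothetical), the positive test rate is simply positives divided by total tests:

```python
def positive_test_rate(positives: int, total_tests: int) -> float:
    """Share of tests that come back positive."""
    return positives / total_tests

# Hypothetical figures: 90 positives out of 1,000 tests.
rate = positive_test_rate(90, 1000)   # 0.09
meets_who_benchmark = rate <= 0.10    # True: testing volume looks adequate
```

A rate well above 10% suggests many infections are going undetected, so low case numbers alone tell you little.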

In Australia, we have been at the forefront of making sure we are doing enough testing and we are confident that we are identifying most of the cases. Undertesting has been a problem in some other countries.

How is the information being mapped?

It’s not just the numbers that matter. How the numbers are shown is also important so that map readers get an accurate picture of what we know.

The Victorian Government recently advised Melburnians to avoid travel to and from several local council areas because of high case numbers. But their publicly available map does not show this clearly.

Compare the government-produced map with a map of the same data mapped differently. Most people interpret light as few cases and dark as more cases. The government-produced map uses dark colours for both low and high numbers of cases.

Active COVID-19 Cases in Victoria, 22 June 2020, ©State of Victoria 2020.
Victorian Government Department of Health and Human Services
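The convention described above, light for few cases and dark for many, corresponds to a sequential colour ramp. Here is a minimal sketch of that idea (the greyscale ramp and counts are illustrative, not the government map’s actual palette):

```python
def greyscale_ramp(cases: int, max_cases: int) -> str:
    """Sequential scheme: map a case count to a grey hex colour, light = few, dark = many."""
    t = cases / max_cases if max_cases else 0.0   # 0.0 (no cases) .. 1.0 (highest count)
    level = round(255 * (1 - t))                  # 255 = white, 0 = black
    return f"#{level:02x}{level:02x}{level:02x}"

greyscale_ramp(0, 100)    # "#ffffff": lightest shade for no cases
greyscale_ramp(100, 100)  # "#000000": darkest shade for the most cases
```

Using dark colours at both ends of the scale, as in the government-produced map, breaks this one-directional mapping and invites misreading.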

Who made this map and why did they make it?

Maps can inform, misinform, and disinform, like any other information source. So it is important to pay attention to the map’s context as well as the author.

Viral maps are maps that spread quickly and widely, often via social media. Viral maps cannot always be trusted, even when they come from a reputable source. Maps that are trustworthy in one context may not be in another.

An example from Australian news media in February shows this. Several media outlets showed a map that was tweeted by UK researchers. The tweet announced the publication of their new paper about COVID-19.

The media reported the map showed locations to which COVID-19 had spread from Wuhan, China, the origin of the outbreak. It actually depicted airline flight routes, and was used in the tweet to illustrate how globally linked the world is. The map was from a 2012 study, not the 2020 study.

Original tweeted map that went viral and was picked up by many news outlets, © WorldPopProject.
WorldPopProject, archived on the Wayback Machine

Many readers may have trusted that reporting because their justifiable anxiety about COVID-19 was reinforced by the map’s design choices. The mass of overlapping red symbols creates a powerful and alarming impression.

While the lines on the map indicate potential routes for virus spread, the map doesn’t provide evidence that the virus did spread along all of these routes. The researchers didn’t claim that it did. But without understanding why the map was made and what it showed, several media outlets reported it inaccurately.

Maps on social media are especially likely to be missing important context and explanation. The airline route map was re-shared many times as in the tweet below, often without any source information, making it hard to check its trustworthiness.

Limiting the damage done by COVID-19 is a very substantial challenge. Maps can help ordinary citizens to work together with governments to achieve that outcome. But they need to be made and read with care. Ask yourself what is being mapped, how it’s being mapped, who made the map and why they made it.The Conversation

Amy Griffin, Senior Lecturer, Geospatial Sciences, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why is it so hard to stop COVID-19 misinformation spreading on social media?




Tobias R. Keller, Queensland University of Technology and Rosalie Gillett, Queensland University of Technology

Even before the coronavirus arrived to turn life upside down and trigger a global infodemic, social media platforms were under growing pressure to curb the spread of misinformation.

Last year, Facebook cofounder and chief executive Mark Zuckerberg called for new rules to address “harmful content, election integrity, privacy and data portability”.

Now, amid a rapidly evolving pandemic, when more people than ever are using social media for news and information, it is crucial that people can trust this content.




Read more:
Social media companies are taking steps to tamp down coronavirus misinformation – but they can do more


Digital platforms are now taking more steps to tackle misinformation about COVID-19 on their services. In a joint statement, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube have pledged to work together to combat misinformation.

Facebook has traditionally taken a less proactive approach to countering misinformation. A commitment to protecting free expression has led the platform to allow misinformation in political advertising.

More recently, however, Facebook’s spam filter inadvertently marked legitimate news information about COVID-19 as spam. While Facebook has since fixed the mistake, this incident demonstrated the limitations of automated moderation tools.

In a step in the right direction, Facebook is allowing national ministries of health and reliable organisations to advertise accurate information on COVID-19 free of charge. Twitter, which prohibits political advertising, is allowing links to the Australian Department of Health and World Health Organization websites.

Twitter is directing users to trustworthy information.
Twitter.com

Twitter has also announced a suite of changes to its rules. These include updates to how it defines harm, so as to address content that goes against authoritative public health information, and an increase in its use of machine learning and automation technologies to detect and remove potentially abusive and manipulative content.

Previous attempts unsuccessful

Unfortunately, Twitter has been unsuccessful in its recent attempts to tackle misinformation (or, more accurately, disinformation – incorrect information posted deliberately with an intent to obfuscate).

The platform has begun to label doctored videos and photos as “manipulated media”. The crucial first test of this initiative was a widely circulated altered video of Democratic presidential candidate Joe Biden, in which part of a sentence was edited out to make it sound as if he was forecasting President Donald Trump’s re-election.

A screenshot of the tweet featuring the altered video of Joe Biden, with Twitter’s label.
Twitter

It took Twitter 18 hours to label the video, by which time it had already received 5 million views and 21,000 retweets.

The label appeared below the video (rather than in a more prominent place), and was only visible to the roughly 757,000 accounts that followed the video’s original poster, White House social media director Dan Scavino. Users who saw the content via retweets from the White House (21 million followers) or President Donald Trump (76 million followers) did not see the label.

Labelling misinformation doesn’t work

There are four key reasons why Twitter’s (and other platforms’) attempts to label misinformation were ineffective.

First, social media platforms tend to use automated algorithms for these tasks, because they scale well. But labelling manipulated tweets requires human labour; algorithms cannot decipher complex human interactions. Will social media platforms invest in human labour to solve this issue? The odds are long.

Second, tweets can be shared millions of times before being labelled. Even if removed, they can easily be edited and then reposted to avoid algorithmic detection.

Third, and more fundamentally, labels may even be counterproductive, serving only to pique the audience’s interest. In this way, labels may actually amplify misinformation rather than curtail it.

Finally, the creators of deceptive content can deny their content was an attempt to obfuscate, and claim unfair censorship, knowing that they will find a sympathetic audience within the hyper-partisan arena of social media.

So how can we beat misinformation?

The situation might seem impossible, but there are some practical strategies that the media, social media platforms, and the public can use.

First, unless the misinformation has already reached a wide audience, avoid drawing extra attention to it. Why give it more oxygen than it deserves?

Second, if misinformation has reached the point at which it requires debunking, be sure to stress the facts rather than simply fanning the flames. Refer to experts and trusted sources, and use the “truth sandwich”, in which you state the truth, and then the misinformation, and finally restate the truth again.
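As a toy illustration of that structure (the function is hypothetical, purely to make the ordering concrete), a "truth sandwich" always opens and closes with the fact, mentioning the myth only once in the middle:

```python
def truth_sandwich(fact: str, myth: str) -> str:
    """Lead with the fact, name the myth once, then restate the fact."""
    return f"{fact} A false claim circulating says: {myth}. To repeat: {fact}"
```

The point of the ordering is that repetition breeds familiarity, so the statement repeated twice (the fact) is the one readers are most likely to retain.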

Third, social media platforms should be more willing to remove or restrict unreliable content. This might include disabling likes, shares and retweets for particular posts, and banning users who repeatedly misinform others.

For example, Twitter recently removed coronavirus misinformation posted by Rudy Giuliani and Charlie Kirk; the Infowars app was removed from Google’s app store; and, perhaps with the highest impact, Facebook, Twitter and Google’s YouTube removed coronavirus misinformation posted by Brazil’s President Jair Bolsonaro.




Read more:
Meet ‘Sara’, ‘Sharon’ and ‘Mel’: why people spreading coronavirus anxiety on Twitter might actually be bots


Finally, all of us, as social media users, have a crucial role to play in combating misinformation. Before sharing something, think carefully about where it came from. Verify the source and its evidence, double-check with other independent sources, and report suspicious content to the platform directly. Now, more than ever, we need information we can trust.The Conversation

Tobias R. Keller, Visiting Postdoc, Queensland University of Technology and Rosalie Gillett, Research Associate in Digital Platform Regulation, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How not to fall for coronavirus BS: avoid the 7 deadly sins of thought



Shutterstock

Luke Zaphir, The University of Queensland

With the COVID-19 pandemic causing a great deal of anxiety, we might come to think people are irrational, selfish or downright crazy. We see people showing up to public venues en masse or clearing supermarket shelves of toilet paper.

Experts are often ignored. We hear inconsistent information and see arguments filled with fallacious reasoning being accepted by a seemingly large number of people.

The explanation for this kind of panicked, flawed reasoning may lie in a field of critical thinking called vice epistemology. This theory argues our thinking habits and intellectual character traits cause poor reasoning.

These thinking habits are developed over a lifetime. When these habits are poorly developed, we can end up with intellectual vices. The more we think viciously (that is, through our vices), the harder it is for us to effectively inquire and seek truth.

Vice epistemology points to many thinking vices and sins that cause problems for inquiry. I have chosen seven that show up regularly in the literature:

1. Sin of gullibility

I heard coronavirus particles can stay in the air for up to five days!

Researchers found SARS-CoV-2, the virus that causes the disease COVID-19, remains infectious in airborne droplets for at least three hours.

But all sorts of claims are being touted by people and we’re all guilty of having believed someone who isn’t an expert or simply doesn’t know what they’re talking about. Gullibility as a thinking sin means that we lack the ability to determine the credibility of information.




Read more:
Coronavirus: how long does it take to get sick? How infectious is it? Will you always have a fever? COVID-19 basics explained


Relevant expertise and experience are essential qualities when we’re listening to someone’s own argument. But with something like COVID-19, it’s also important we look at the type of expertise someone has. A GP might be able to tell us how we get the infection – but they wouldn’t count as an expert in infectious disease epidemiology (the way an infectious disease spreads across a population).

2. Sin of cynicism

I’d better stock up on toilet paper before everyone else buys it.

In many ways, cynicism is the opposite of gullibility. It is being overly suspicious of others in their arguments and actions.

If you’ve suddenly become suspicious of your neighbours and what they might do when supermarket stocks are limited, that’s a cynical way to think.

If we think the worst interpretation of arguments and events is correct, we can’t inquire and problem-solve effectively.

3. Sin of pride

I know what’s best for my family!

Pride is an intellectual sin (though it’s more popular as a spiritual one). In this particular case, it is the habit of not admitting to ourselves or to others that we don’t know the answer. Or perhaps that we don’t understand the issue.

We obstruct a genuine search for truth if we are dogmatic in our self-belief.

Do you think you know better than everyone else?
Shutterstock

It’s effective reasoning to take what the evidence and experts say and then apply it specifically to our individual needs. But we have gone astray in our thinking if we contradict those who know more than us and are unwilling to admit our own limitations.

4. Sin of closed-mindedness

I won’t accept that.

Closed-mindedness means we’re not willing to see things from different perspectives or accept new information. It’s a serious intellectual vice as it directly interferes with our ability to adjust our beliefs according to new information.

Worse still, being closed-minded to new ideas and information means it’s even more challenging to learn and grow – we’d be closed-minded to the idea that we’re closed-minded.

5. Sin of prejudice

I’ve stopped buying Chinese food – just in case.

Prejudiced thinking is an intellectual vice we often start developing early in life. Children can be incredibly prejudiced in small ways – such as being unwilling to try new foods because they already somehow know they’re gross.




Read more:
Coronavirus fears can trigger anti-Chinese prejudice. Here’s how schools can help


As a character flaw, it means we often substitute preconceived notions for actual thinking.

6. Sin of negligence

SARS was more deadly than COVID-19, and that wasn’t that big a deal.

Creating a poor analogy like this one is not a substitute for thoughtful research and considered analysis.

Still, it is difficult to explore every single topic with thorough evaluation. There’s so much information out there at the moment it can be a real chore to investigate every claim we hear.

But if we’re not willing to check the facts, we’re being negligent in our thinking.

7. Sin of wishful thinking

This will all be over in a week or two and it’ll be business as usual.

Our capacity to believe in ourselves, our hard work, our friends and culture can often blind us to hard truths.

It’s perfectly fine to aim for a certain outcome but we need to recognise it doesn’t matter how much we hope for it – our desire doesn’t affect the likelihood of it happening.




Read more:
Thinking about thinking helps kids learn. How can we teach critical thinking?


A pandemic like COVID-19 shows our way of life is fragile and can change at any moment. Wishful thinking ignores the stark realities and can set us up for disappointment.

So, what can we do about it?

There are some questions we can ask ourselves to help improve our intellectual character traits:

What would change my mind?

It’s a red flag for the sin of pride if nothing will change your mind.

What is the strongest argument the other side has?

We often hold each piece of the truth in our own perspective. It’s worth keeping in mind that unless there’s wanton cruelty involved, chances are differing arguments will have some good points.

What groups would gain or lose the most if we keep thinking this way?

Sometimes we fail to consider the practical outcomes of our thoughts for people who aren’t like us. We’ve seen in the last few weeks that the people who have a lot to lose (such as casual workers) matter when it comes to the way we respond to the pandemic.

It’s worth taking a moment to consider their perspectives.

How much do you actually know about an issue? Who is an expert?

The experts always have something to say. If they agree on it, it’s a good indication we should believe them. If there isn’t general consensus, we should be dubious of one-sided claims to truth.

And remember to check the person’s actual expertise – it’s too easy to mistake a political leader or famous person for an expert.

In challenging days like these, we may be able to help ensure a better outcome for everyone if we start by asking ourselves a few simple questions.The Conversation

Luke Zaphir, Researcher for the University of Queensland Critical Thinking Project, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

When a virus goes viral: pros and cons to the coronavirus spread on social media



Tim Gouw/Unsplash, CC BY

Axel Bruns, Queensland University of Technology; Daniel Angus, Queensland University of Technology; Timothy Graham, Queensland University of Technology, and Tobias R. Keller, Queensland University of Technology

News and views about coronavirus have spread via social media in a way no previous health emergency has seen.

Platforms like Twitter, Facebook, TikTok and Instagram have played critical roles in sharing news and information, but also in disseminating rumours and misinformation.

Getting the message out

Early on, snippets of information circulated on Chinese social media platforms such as Weibo and WeChat, before state censors banned discussions. These posts already painted a grim picture, and Chinese users continue to play cat and mouse with the Internet police in order to share unfiltered information.

As the virus spread, so did the social media conversation. On Facebook and Twitter, discussions have often taken place ahead of official announcements: calls to cancel the Australian Formula One Grand Prix were trending on Twitter days before the official decision.

Similarly, user-generated public health explainers have circulated while official government agencies in many countries discuss campaign briefs with advertising agencies.

Many will have come across (and, hopefully, adopted) hand-washing advice set to the lyrics of someone’s favourite song:

Widespread circulation of graphs has also explained the importance of “flattening the curve” and social distancing.

Debunking myths

Social media have been instrumental in responding to COVID-19 myths and misinformation. Journalists, public health experts, and users have combined to provide corrections to dangerous misinformation shared in US President Donald Trump’s press conferences:

Other posts have highlighted potentially deadly assumptions in the UK government’s herd immunity approach to the crisis:

Users have also pointed out inconsistencies in the Australian cabinet’s response to Home Affairs Minister Peter Dutton’s coronavirus diagnosis.

The circulation of such content through social media is so effective because we tend to pay more attention to information we receive through our networks of social contacts.

Similarly, professional health communicators like Dr Norman Swan have been playing an important role in answering questions and amplifying public health messages, while others have set up resources to keep the public informed on confirmed cases:

Even just seeing our leaders’ poor hygienic practices ridiculed might lead us to take better care ourselves:

Some politicians, like Australian Prime Minister Scott Morrison, blandly dismiss social media channels as crucial sources of crisis information, despite more than a decade of research showing their importance.

This is deeply unhelpful: they should be embracing social media channels as they seek to disseminate urgent public health advice.

Stoking fear

The downside of all that user-driven sharing is that it can lead to mass panic and irrational behaviour – as we have seen with the panic-buying of toilet paper and other essentials.

The panic spiral spins even faster when social media trends are amplified by mainstream media reporting, and vice versa: even a handful of widely shared images of empty shelves in supermarkets might lead consumers to buy what’s left, if media reporting makes the problem appear much larger than it really is.

News stories and tweets showing empty shelves are much more news- and share-worthy than fully stocked shelves: they’re exceptional. But a focus on these pictures distorts our perception of what is actually happening.

The promotion of such biased content by the news media then creates a higher “viral” potential, and such content gains much more public attention than it otherwise would.

Levels of fear and panic are already higher during times of crisis, of course. As a result, some of us – including journalists and media outlets – might also be willing to believe new information we would otherwise treat with more scepticism. This skews the public’s risk perception and makes us much more susceptible to misinformation.

A widely shared Twitter post showed how panic buying in (famously carnivorous) Glasgow had skipped the vegan food section:

Closer inspection revealed the photo originated from Houston during Hurricane Harvey in 2017 (the dollar signs on the food prices are a giveaway).

This case also illustrates the ability of social media discussion to self-correct, though this can take time, and corrections may not travel as far as the initial falsehoods. The potential for social media to stoke fears can be gauged by the difference in reach between a falsehood and its correction.

The spread of true and false information is also directly affected by the platform architecture: the more public the conversations, the more likely it is that someone might encounter a falsehood and correct it.

In largely closed, private spaces like WhatsApp, or in closed groups or private profile discussions on Facebook, we might see falsehoods linger for considerably longer. A user’s willingness to correct misinformation can also be affected by their need to maintain good relationships within their community. People will often ignore misinformation shared by friends and family.

And unfortunately, the platforms’ own actions can also make things worse: this week, Facebook’s efforts to control “fake news” posts appeared to affect legitimate stories by mistake.

Rallying cries

Their ability to sustain communities is one of the great strengths of social media, especially as we are practising social distancing and even self-isolation. The internet still has a sense of humour which can help ease the ongoing tension and fear in our communities:

Younger generations are turning to newer social media platforms such as TikTok to share their experiences and craft pandemic memes. A key feature of TikTok is the uploading and repurposing of short music clips by platform users – the clip “It’s Corona Time” has been used in over 700,000 posts.

We have seen substantial self-help efforts conducted via social media: school and university teachers who have been told to transition all of their teaching to online modes at very short notice, for example, have begun to share best-practice examples via the #AcademicTwitter hashtag.

The same is true for communities affected by event shutdowns and broader economic downturns, from freelancers to performing artists. Faced with bans on mass gatherings, some artists are finding ways to continue their work: providing access to 600 live concerts via digital concert halls or streaming concerts live on Twitter.

Such patterns are not new: we encountered them in our research as early as 2011, when social media users rallied together during natural disasters such as the Brisbane floods, Christchurch earthquakes, and Sendai tsunami to combat misinformation, amplify the messages of official emergency services organisations, and coordinate community activities.

Especially during crises, most people just want themselves and their community to be safe.The Conversation

Axel Bruns, Professor, Creative Industries, Queensland University of Technology; Daniel Angus, Associate Professor in Digital Communication, Queensland University of Technology; Timothy Graham, Senior Lecturer, Queensland University of Technology, and Tobias R. Keller, Visiting Postdoc, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

We’re in danger of drowning in a coronavirus ‘infodemic’. Here’s how we can cut through the noise



Paul Hanaoka/Unsplash

Connal Lee, University of South Australia

The novel coronavirus that has so far killed more than 1,100 people now has a name – COVID-19.

The World Health Organisation (WHO) didn’t want the name to refer to a place, animal or certain group of people and needed something pronounceable and related to the disease.

“Having a name matters to prevent the use of other names that can be inaccurate or stigmatising,” said WHO director-general Tedros Adhanom Ghebreyesus.

The organisation has been battling misinformation about the coronavirus, with some experts warning rumours are spreading more rapidly than the disease itself.




Read more:
Coronavirus fears: Should we take a deep breath?


The WHO describes the overabundance of information about the coronavirus as an “infodemic”. Some information is accurate, but much of it isn’t – and it can be difficult to tell what’s what.

What’s the problem?

Misinformation can spread unnecessary fear and panic. During the 2014 Ebola outbreak, rumours about the disease led to panic-buying, with many people purchasing Ebola virus protection kits online. These contained hazmat suits and face masks, which were unnecessary for protection against the disease.

As we’ve seen with the coronavirus, misinformation can prompt blame and stigmatisation of infected and affected groups. Since the outbreak began, Chinese Australians, who have no connection or exposure to the virus, have reported an increase in anti-Chinese language and abuse both online and on the streets.




Read more:
Coronavirus fears can trigger anti-Chinese prejudice. Here’s how schools can help


Misinformation can also undermine people’s willingness to follow legitimate public health advice. In extreme cases, people don’t acknowledge the disease exists, and fail to take proven precautionary measures.

In other cases, people may not seek help due to fears, misconceptions or a lack of trust in authorities.

The public may also grow bored or apathetic due to the sheer quantity of information out there.

Mode of transmission

The internet can be an ally in the fight against infectious diseases. Accurate messages about how the disease spreads and how to protect yourself and others can be distributed promptly and accessibly.

But inaccurate information spreads rapidly online. Users can find themselves inside echo chambers, embracing implausible conspiracy theories and ultimately distrusting those in charge of the emergency response.

The infodemic continues offline as information spreads via mobile phone, traditional media and in the work tearoom.

Previous outbreaks show authorities need to respond to misinformation quickly and effectively, while remaining aware that not everybody will believe the official line.

Responding to the infodemic

Last week, rumours emerged that the coronavirus was transmitted through infectious clouds in the air that people could inhale.

The WHO promptly responded to these claims, noting this was not the case. WHO’s Director of Global Infectious Hazard Preparedness, Sylvie Briand, explained:

Currently the virus is transmitted through droplets and you need a close contact to be infected.

This simple intervention demonstrates how a timely response can be effective. However, it may not convince everyone.




Read more:
Coronavirus fears: Should we take a deep breath?


Official messages need to be consistent to avoid confusion and information overload. However, coordination can be difficult, as we’ve seen this week.

Potentially overly optimistic predictions have come from Chinese health officials saying the outbreak will be over by April. Meanwhile, the WHO has given dire warnings, saying the virus poses a bigger threat than terrorism.

These inconsistencies can be understandable as governments try to placate fears while the WHO encourages us to prepare for the worst.

Health authorities should keep reiterating key messages, like the importance of regularly washing your hands. This is a simple and effective measure that helps people feel in control of their own protection. But it can be easily forgotten in a sea of information.

It’s worth reminding people to regularly wash their hands.
CDC/Unsplash

A challenge is that authorities may struggle to compete with the popularity of sensationalist stories and conspiracy theories about how diseases emerge, spread and what authorities are doing in response. Conspiracies may be more enjoyable than the official line, or may help some people preserve their existing, problematic beliefs.

Sometimes a prompt response won’t successfully cut through this noise.

Censorship isn’t the answer

Although censoring a harmful view could limit its spread, it could also make that view popular. Hiding negative news or over-reassuring people can leave them vulnerable and unprepared.

Censorship and media silence during the 1918 Spanish flu, which included not releasing numbers of affected and dead, undercut the seriousness of the pandemic.

When the truth emerges, people lose trust in public institutions.

Past outbreaks illustrate that building trust and legitimacy is vital to get people to adhere to disease prevention and control measures such as quarantines. Trying to mitigate fear through censorship is problematic.

Saving ourselves from drowning in a sea of (mis)information

The internet is useful for monitoring infectious diseases outbreaks. Tracking keyword searches, for example, can detect emerging trends.

Observing online communication offers an opportunity to quickly respond to misunderstandings and to build a picture of what rumours gain the most traction.
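One simple way to surface such trends (a sketch under our own assumptions, not any health agency's actual method) is to flag a search term whose daily volume jumps well above its recent baseline:

```python
from statistics import mean

def spiking(recent_counts: list[int], today: int, factor: float = 3.0) -> bool:
    """Flag a keyword whose volume today is several times its recent daily average."""
    baseline = mean(recent_counts) if recent_counts else 0.0
    # The max() guards against a near-zero baseline flagging every tiny fluctuation.
    return today > factor * max(baseline, 1.0)

# A term averaging about 10 searches a day that suddenly draws 50 gets flagged.
```

Real monitoring systems add seasonality adjustments and statistical significance tests, but the underlying thresholding idea is the same.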

Health authorities’ response to the infodemic should include a strategy for engaging with and even listening to those who spread or believe inaccurate stories to gain deeper understanding of how infodemics spread.The Conversation

Connal Lee, Associate Lecturer, Philosophy, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

9 ways to talk to people who spread coronavirus myths



from www.shutterstock.com

Claire Hooker, University of Sydney

The spread of misinformation about the novel coronavirus, now known as COVID-19, seems greater than the spread of the infection itself.

The World Health Organisation (WHO), government health departments and others are trying to alert people to these myths.

But what’s the best way to tackle these if they come up in everyday conversation, whether that’s face-to-face or online? Is it best to ignore them, jump in to correct them, or are there other strategies we could all use?




Read more:
The coronavirus and Chinese social media: finger-pointing in the post-truth era


Public health officials expect misinformation about disease outbreaks where people are frightened. This is particularly so when a disease is novel and the science behind it is not yet clear. It’s also the case when we still don’t know how many people are likely to become sick, have a life-threatening illness or die.

Yet we can all contribute to the safe control of the disease and to minimising its social and economic impacts by addressing misinformation when we encounter it.

To avoid our efforts backfiring, we need to know how to do this effectively and constructively.




Read more:
We depend so much more on Chinese travellers now. That makes the impact of this coronavirus novel


What doesn’t work

Abundant research shows what doesn’t work. Telling people not to panic, or that their perceptions and beliefs are incorrect, can actually strengthen their commitment to their incorrect views.

Over-reactions are common when new risks emerge and these over-reactions will pass. So, it’s often the best choice to not engage in the first place.




Read more:
Listen up, health officials – here’s how to reduce ‘Ebolanoia’


What can I do?

If you wish to effectively counter misinformation, you need to pay more attention to your audience than to the message you want to convey. See our tips below.

Next, you need to be trusted.

People only listen to sources they trust. This involves putting in the time and effort to make sure your knowledge is correct and reliable; discussing information fairly (what kind of information would make you change your own mind?); and being honest enough to admit when you don’t know, and even more importantly, when you are wrong.

Here’s how all this might work in practice.

1. Understand how people perceive and react to risks

We all tend to worry more about risks we perceive to be new, uncertain and dreaded, and that impact a large group in a short time – all features of the new coronavirus.

Our worries increase significantly if we do not feel we, or the governments acting for us, have control over the virus.




Read more:
Coronavirus fears: Should we take a deep breath?


2. Recognise people’s concerns

People can’t process information unless they see their worries being addressed.

So instead of offering facts (“you won’t catch coronavirus from your local swimming pool”), articulate their worry (“you’ve caught colds in swimming pools before, and now you’re worried someone might transmit the virus before they know they are infected”).

Being heard helps people re-establish a sense of control.




Read more:
How to cut through when talking to anti-vaxxers and anti-fluoriders


3. Be aware of your own feelings

Usually when we want to correct someone, it’s because we’re worried about the harms their false beliefs will cause.

But if we are emotional, what we communicate is not our knowledge, but our disrespect for the other person’s views. This usually produces a defensive reaction.

Manage your own outrage first before jumping in to correct others. This might mean saving a discussion for another day.




Read more:
4 ways to talk with vaccine skeptics


4. Ask why someone is worried

If you ask why someone is worried, you might discover your assumptions about that person are wrong.

Explaining their concerns to you helps people explore their own views. They might become aware of what they don’t know or of how unlikely their information sounds.




Read more:
Everyone can be an effective advocate for vaccination: here’s how


5. Remember, the facts are going to change

Because there is still considerable uncertainty about how severe the epidemic will be, both the information available and the government’s response to it are going to change.

So you will need to frequently update your own views. Know where to find reliable information.

For instance, the websites of state and federal health departments, the World Health Organization (WHO) and the US Centers for Disease Control and Prevention (CDC) provide authoritative and up-to-date information.

6. Admit when you’re wrong

Being wrong is likely in an uncertain situation. If you are wrong, say so early.

If you asked your family or employees to take avoidance measures you now realise aren’t really necessary, then admit it and apologise. This helps restore the trust you need to communicate effectively the next time you need to raise an issue.

7. Politely provide your own perspective

Phrases like, “here’s why I am not concerned about that” or “I actually feel quite confident about doing X or Y” offer ways to communicate your knowledge without attacking someone else’s views.

You can and should be explicit about what harms you worry misinformation can cause. An example could be, “I’m worried that avoiding Chinese restaurants will really hurt their business. I’m really conscious of wanting to support Chinese Australians right now.”




Read more:
Coronavirus fears can trigger anti-Chinese prejudice. Here’s how schools can help


8. On social media, model the behaviour you want to see

It’s harder to be effective on social media, where outrage, not listening, is common. Often your goal might be to promote a reasoned, civil discussion, not to defend one particular belief over another. Use very reliable links.




Read more:
False information fuels fear during disease outbreaks: there is an antidote


9. Don’t make it worse online

Your online comment can unintentionally reinforce misinformation, for example by giving it more prominence. Check the Debunking Handbook for some strategies to avoid this.

Make sure your posts or comments are polite, specific, factual and very brief.

Acknowledging common values or points of connection by using phrases such as “I’m worried about my grandmother, too”, or by being supportive (“It’s so great that you’re proactive about looking after your staff”), can help.

Remember why this is important

The ability to respond to emergencies rests on having a civil society. The goal is to keep relationships constructive and dialogue open – not to be right.

Claire Hooker, Senior Lecturer and Coordinator, Health and Medical Humanities, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.