How do we counter COVID misinformation? Challenge it directly with the facts




Adam Dunn, University of Sydney

The government is rolling out a new public information campaign this week to reassure the public about the safety of COVID-19 vaccines, which one expert has said “couldn’t be more crucial” to people actually getting the jabs when they are available.

Lack of access to vaccines is the most important barrier to widespread immunisation, so this campaign should go a long way toward getting the right people vaccinated at the right time.

But it also comes as government ministers — and even the prime minister — have refused to address the COVID-19 misinformation coming from those within their own ranks.

Despite advice from the Therapeutic Goods Administration explaining that hydroxychloroquine is not an effective treatment for COVID-19, MP Craig Kelly has continued to promote the opposite on Facebook. A letter he wrote on the same topic, bearing the Commonwealth coat of arms, was also widely distributed.

He has also incorrectly advocated the use of the anti-parasitic drug ivermectin as a treatment for COVID-19, and encouraged people to protest against what he called “health bureaucrats in an ivory tower”.

Compared to health experts, politicians and celebrities tend to have access to larger and more diverse audiences, particularly on social media. But politicians and celebrities may not always have the appraisal skills they need to assess clinical evidence.

I spend much of my time examining how researchers introduce biases into the design and reporting of trials and systematic reviews. Kelly probably has less experience in critically appraising trial design and reporting. But if he and I were competing for attention among Australians, his opinions would certainly reach a much larger and varied segment of the population.

Does misinformation really cause harm?

According to a recent Quantum Market Research survey of 1,000 people commissioned by the Department of Health, four in five respondents said they were likely to get a COVID-19 vaccine when it’s made available.

Australia generally has high levels of vaccine confidence compared to other wealthy countries – 72% strongly agree that vaccines are safe and less than 2% strongly disagree.

But there does appear to be some hesitancy about the COVID-19 vaccine. In the Quantum survey, 27% of respondents overall, and 42% of women in their 30s, had concerns about vaccine safety. According to the report, this showed

a need to dispel some specific fears held by certain cohorts of the community in relation to potential adverse side effects.

For other types of COVID misinformation, a University of Sydney study found that younger men had stronger agreement with misconceptions and myths, such as that hydroxychloroquine is an effective treatment, that 5G networks spread the virus, or that the virus was engineered in a lab.

Surveys showing how attitudes and beliefs vary by demographics are useful, but it is difficult to know how exposure to misinformation affects the decisions people make about their health in the real world.




Read more:
Laws making social media firms expose major COVID myths could help Australia’s vaccine rollout


Studies measuring what happens to people’s behaviours after misinformation reaches a mainstream audience are rare. One study from 2015 looked at the effect of an ABC Catalyst episode that misrepresented evidence about cholesterol-lowering drugs — it found fewer people filled their statin prescriptions after the show.

When it comes to COVID-19, researchers are only starting to understand the influence of misinformation on people’s behaviours.

After public discussion about using bleach to potentially treat COVID-19, for instance, the number of internet searches about injecting and drinking disinfectants increased. This was followed by a spike in the number of calls to poison control phone lines for disinfectant-related injuries.

As vaccine roll-outs accelerate around the world, concern is growing about vaccine hesitancy among certain groups.
Peter Dejong/AP

Does countering misinformation online work?

The aim of countering misinformation is not to change the opinions of the people posting it, but to reduce misperceptions among the often silent audience. Public health organisations promoting the benefits of vaccinations on social media consider this when they decide to engage with anti-vaccine posts.

A study published this month by two American researchers, Emily Vraga and Leticia Bode, tested the effect of posting an infographic correction in response to misinformation about the science of a false COVID-19 prevention method. They found a bot developed with the World Health Organization and Facebook was able to reduce misperceptions by posting factual responses to misinformation when it appeared.




Read more:
Why is it so hard to stop COVID-19 misinformation spreading on social media?


A common concern about correcting misinformation in this way is that it might cause a backfire effect, leading people to become more entrenched in misinformed beliefs. But research shows the backfire effect appears to be much rarer than first thought.

Vraga and Bode found no evidence of a backfire effect in their study. Their results suggest that responding to COVID-19 misinformation with factual information is likely to do more good than harm.

So, what’s the best strategy?

Social media platforms can address COVID-19 misinformation by simply removing or labelling posts and deplatforming users who post it.

This is probably most effective in situations where the user posting the misinformation has a small audience. In these cases, responding to misinformation with facts in a more direct way may be a waste of time and could unintentionally amplify the post.

When misinformation is shared by people like Kelly who are in positions of power and influence, removing those posts is like cutting a head off a hydra. It doesn’t stop the spread of misinformation at the source and more of the same will likely fill the void left behind.




Read more:
Most government information on COVID-19 is too hard for the average Australian to understand


In these instances, governments and organisations should consider directly countering misinformation where it occurs. To do this effectively, they need to consider the size of the audience, respond to the misinformation and not the person, and present evidence in simple and engaging ways.

The government’s current campaign fills an important gap in providing simple and clear information about who should get vaccinated and how. It doesn’t directly address the misinformation problem, but I think this would be the wrong place for that kind of effort, anyway.

Instead, research suggests it might be better to directly challenge misinformation where it appears. Rather than demanding the deplatforming of the people who post misinformation, we might instead think of it as an opportunity to correct misperceptions in front of the audiences that really need it.The Conversation

Adam Dunn, Associate professor, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

3 fallacies that blighted this year’s COVID commentary — have you fallen foul of any of them?


Rachael L. Brown, Australian National University

Throughout the pandemic we have seen a deluge of outright lies, conspiracy theories and pseudoscience from various peddlers of self-interest.

But to a philosopher like me, more vexing than these calculated cases of disinformation has been the amount of sloppy reasoning in public discourse about Australia’s COVID epidemic.

Barely a day goes by without a politician, official or commentator making the kind of basic failure of critical thinking that I teach first-year philosophy undergraduates to avoid.

While these are sometimes deliberate attempts to obfuscate, it is more frequently the well-intentioned who fall victim to these often appealing fallacies. The only antidote is a large dose of scepticism, mixed with some understanding of where our reasoning frequently goes wrong.

Here are three critical thinking errors that were rife in 2020.

Fallacy 1: false comparisons

In arguing against lockdowns, it was not uncommon to hear people decry the “hidden cost” of public health measures designed to curb the virus’s spread. Commonly cited examples include drops in cancer detection or the negative impacts of school closures, particularly on students from disadvantaged backgrounds.

It is certainly reasonable to ask whether the costs of lockdown outweigh the benefits. But any such reckoning needs to factor in the costs of not imposing a lockdown.

It is a mistake to use the “pre-COVID normal” as the baseline for comparison. We’re not in Kansas any more, Toto. Pre-COVID cancer rates or school grades are irrelevant when thinking about the impact of public health measures in our current circumstances.

What is relevant are the expected outcomes given the impact of the COVID infections that would occur without public health measures in place. In the case of cancer detection, for example, we should expect a drop in diagnoses relative to pre-COVID levels both with, and without, lockdowns in place. During a pandemic, the fear of infection is a significant extra factor making people less likely to visit their doctor for a cancer check.

Similarly, when looking at the impact of school closures, particularly on socioeconomically vulnerable students, we need to factor in the likely impact of increased COVID infections. As has been shown both at home and abroad, the impacts of COVID outbreaks are disproportionately felt by disadvantaged communities.




Read more:
The costs of the shutdown are overestimated — they’re outweighed by its $1 trillion benefit


Fallacy 2: failing to see the nuance behind the numbers

Victorians were understandably glued to the daily case numbers during their epic lockdown, while their New South Wales neighbours nervously kept an eye on their own tally. But the focus on numbers can mislead; bald case numbers don’t tell the whole story.

Why, for example, did two such similar states have such contrasting fortunes? Behind the headline numbers were some key differences that can explain why Victoria endured a major second wave, while NSW escaped relatively unscathed. Not all of them involve differences in contact-tracing capacity.

To illustrate, despite similar absolute case numbers over the ten days to October 14, about 60% of the cases in NSW were returned international travellers, compared with none in Victoria. Given that a positive case in hotel quarantine is easier to contain than one at large among the public, Victoria clearly faced a more challenging situation than NSW.

Similarly, there are other features of the demographics of the Victorian outbreak that also set it apart from NSW, such as the average size of the households in which infected individuals live and the source of their infections. The devil is in the detail.




Read more:
Finally at zero new cases, Victoria is on top of the world after unprecedented lockdown effort


Fallacy 3: thinking everything happens for a reason

The ancient Greeks blamed unexpected bad outcomes in their lives on Tykhe, the goddess of chance, and the Romans similarly blamed Fortuna. In our largely secular modern world, however, we typically assume a bad outcome to be a sign of failure rather than simple bad luck.

But in a pandemic, not only can relatively small differences in situations lead to large differences in outcomes, but these small differences often come down to dumb luck. This is especially true when talking about very small numbers of cases, as we have in Australia now.

At such low numbers, bad luck and chance are likely to play a big role in our fortunes. South Australia, for instance, may have been plunged into lockdown as a result of dodgy ventilation in a hotel corridor.

It is easy to interpret any jump in case numbers as indicating a failure of the public health measures in place. But this overlooks the role of other factors: whether a COVID-positive person lives with one other person or six, or whether they work in aged care, or from home, where they shop, whether or not they developed symptoms while infected, and whether or not they self-isolated as a result. All of this can make a significant difference to the potential number of others whom they infect with the virus.

It is also harder to trace the contacts of someone working outside the home, compared with someone working from home and only leaving to go to the shops once a week. No two infections are truly equal.




Read more:
Exponential growth in COVID cases would overwhelm any state’s contact tracing. Australia needs an automated system


This doesn’t mean we shouldn’t be concerned by a sudden spike in cases, or that we can’t ask questions about what went wrong. But nor does it mean a spike necessarily warrants a shift from our current public health measures.

It’s an uncomfortable thought, but luck is a huge part of where we find ourselves today, and where we could be in the future.The Conversation

Rachael L. Brown, Director of the Centre for Philosophy of the Sciences and Senior Lecturer at the School of Philosophy, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

To stay or cut away? As Trump makes baseless claims, TV networks are faced with a serious dilemma




Denis Muller, University of Melbourne

In the United States, democratic norms are breaking down.

The president, Donald Trump, baselessly claimed at a White House press conference on Friday morning, Australian time, that the presidential election had been stolen from him by fraudulent and corrupt electoral processes.

This confronted the television networks, whose job is to report the news, with an acute dilemma.

In an already volatile political atmosphere, do they go on reporting these lies, laced with an undertone of veiled incitement to violence? Or do they cut away on the grounds that by continuing to broadcast this stuff, they are helping to propagate lies and perhaps to oxygenate a threat to the civil peace?

Major networks tune out

Many of the major networks — MSNBC, NBC News, CNBC, CBS News and ABC News — decided to cut away. So did National Public Radio.

MSNBC presenter Brian Williams said of Trump’s speech:

It was not rooted in reality and at this point, where our country is, it’s dangerous.

CNBC presenter, Shepard Smith, said the network was not going to allow it to keep going because what Trump was saying was not true.

CNN and Rupert Murdoch’s Fox News broadcast Trump’s entire press conference but immediately afterwards challenged what he said. CNN’s fact-checker Daniel Dale said it had been the most “dishonest” speech Trump had ever given, with anchor Jake Tapper saying Trump’s statements were “pathetic” and “a feast of falsehoods”.

Fox’s host Martha MacCallum said the supposed evidence and proof of election misconduct would need to be produced.

Even Murdoch’s New York Post, which had endorsed Trump’s re-election, accused him of making “baseless” election fraud claims, quoting a Republican Congressman as saying they were “insane”.

The Washington Post carried two news stories on its front page, clearly calling out Trump’s lies: “Falsehood upon falsehood”; “A speech of historic dishonesty”.

A serious decision to silence the President

But what of the networks’ decision to cut away?

Silencing a public official in the course of his official duties is a very serious abrogation of the media’s duty in a democracy.

But so is allowing the airwaves to be used in such a way as to arouse fears for public confidence in the democratic process and — as MSNBC’s Williams argued — even public safety.

Donald Trump giving his White House press conference.
Shawn Thew/ EPA

On the run, many of the big networks prioritised public confidence in the democratic process, and public safety, over the reporting of the president’s words.

It is a rare circumstance in any democratic society that the media are placed in the position of having to shoulder such a heavy burden of responsibility.

It is most unlikely that once the present crisis is over, assuming Democratic candidate Joe Biden wins, the American media will find themselves in this position again.

Even so, a Rubicon has been crossed. A president of the United States, a publicly elected official, has been silenced by significant elements of the professional mass media in the course of his public duties.




Read more:
Grattan on Friday: A Biden presidency would put pressure on Scott Morrison over climate change


This was done principally on the grounds he was lying to the people in circumstances where there was a foreseeable risk of serious harm to the body politic, and there was no practicable way to reduce the risk.

Is that a standard the media is prepared to set for the future? If so, it would be giving itself a power that goes well beyond anything the media has claimed for itself up till now.

Journalists need to keep their nerve

In considering this, two questions arise.

What if all media outlets had adopted this course? No one except those at the White House press conference would have known the whole of what Trump said, seen the context and observed the demeanour with which he said it.

Would it have been enough to do as CNN and Fox did — report the speech and then repudiate it?




Read more:
5 types of misinformation to watch out for while ballots are being counted – and after


An answer to that would be: the lies were coming so thick and fast, and were so damaging to the public interest, that it would have been impossible to set the record straight in anything like real time.

Real-time fact-checking is a relatively new development, and a welcome one. But its feasibility should not be a criterion for deciding whether to publish breaking news, unless there is doubt about whether the breaking news is actually happening.

The networks that cut away doubtless acted in good faith to do right by the country. Trump’s speech was shocking and irresponsible.

Trump supporters protest in Detroit.
Trump supporters have taken to the streets since the polls closed on November 3.
Nicole Hester/AP

However, American democracy is in crisis. At this time, above all, the public needs the institution of the fourth estate to keep its nerve and a clear head.

A primary norm of journalism is to inform the public. That certainly means being fair and accurate. But if the news contains lies, the norm is to publish and then call out the lying and set the record straight as soon as possible.

The networks need to explain to their audiences their reasoning behind the decision to cut away, and the media as a whole need to realise that if the norms of journalism break down, that just adds to the tragic chaos into which their country has descended.The Conversation

Denis Muller, Senior Research Fellow, Centre for Advancing Journalism, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why QAnon is attracting so many followers in Australia — and how it can be countered




Kaz Ross, University of Tasmania

On September 5, a coalition of online groups are planning an Australia-wide action called the “Day of Freedom”. The organisers claim hundreds of thousands will join them on the streets in defiance of restrictions on group gatherings and mask-wearing mandates.

Some online supporters believe Stage 5 lockdown will be introduced in Melbourne the following week and the “Day of Freedom” is the last chance for Australians to stand up to an increasingly tyrannical government.

The action is the latest in a series of protests in Australia against the government’s COVID-19 restrictions. The main issues brought up during these protests centre around 5G, government surveillance, freedom of movement and, of course, vaccinations.

And one general conspiracy theory now unites these disparate groups — QAnon.




Read more:
QAnon believers will likely outlast and outsmart Twitter’s bans


Why QAnon has exploded in popularity globally

Since its inception in the US in late 2017, QAnon has morphed beyond a specific, unfounded claim about President Donald Trump working with special counsel Robert Mueller to expose a paedophile ring supposedly run by Bill and Hillary Clinton and the “deep state”. Now, it is an all-encompassing world of conspiracies.

QAnon conspiracy theories now include wild claims ranging from Microsoft founder Bill Gates using coronavirus as a cover to implant microchips in people, to governments erecting 5G towers during lockdown to surveil the population.

Donald Trump has tacitly endorsed QAnon, saying its followers “love our country”.
Leah Millis/Reuters

Last week, Facebook deleted over 790 groups, 100 pages and 1,500 ads tied to QAnon and restricted the accounts of hundreds of other Facebook groups and thousands of Instagram accounts. QAnon-related newsfeed rankings and search results were also downgraded.

Facebook is aiming to reduce the organising ability of the QAnon community, but so far such crackdowns seem to have had little effect on the spread of misinformation.

In July, Twitter removed 7,000 accounts, but the QAnon conspiracy has become even more widespread since then. A series of global “save the children” protests in the last few weeks is proof of how resilient and adaptable the community is.

Why Australians are turning to QAnon in large numbers

QAnon encourages people to look for evidence of conspiracies in the media and in government actions. Looking back over the last several years, we can see a range of events or conspiracy theories that have helped QAnon appeal to increasing numbers of followers in Australia.

1) Conspiracies about global governance

In 2015, Senator Malcolm Roberts claimed the UN’s 1992 “Agenda 21” plan for sustainable development was a foreign plan aimed at depriving nations of their sovereignty and citizens of their property rights.

The belief that “Agenda 21” is a blueprint for corrupt global governance has become a core tenet of QAnon in Australia.

Any talk of “global bankers and cabals” directly taps into longstanding anti-Semitic conspiracies about supposed Jewish world domination often centred on the figure of billionaire George Soros. The pandemic and QAnon have also proven to be fertile ground for neo-Nazis in Australia.

2) Impact of the far-right social media

QAnon has its roots on the far-right bulletin boards of the websites 4Chan and 8Chan. Other campaigns from the same sources, such as the “It’s OK to be White” motion led by One Nation leader Pauline Hanson in the Senate, have been remarkably successful in Australia, showing our susceptibility to viral trolling efforts.

3) Perceived paedophiles in power

During the Royal Commission into Institutional Responses to Child Sexual Abuse, Senator Bill Heffernan tried unsuccessfully to submit the names of 28 prominent Australians who he alleged were paedophiles.

His failure is widely shared in QAnon circles as proof of a cover-up of child abuse at all levels of Australian government. The belief the country is run by a corrupt paedophile cabal is the most fundamental plank of the QAnon platform.

Among the QAnon conspiracy theories in the US is that Hollywood actors have engaged in crimes against children.
CHRISTIAN MONTERROSA/EPA

4) Increasingly ‘unaccountable and incompetent’ governments

A number of recent events have eroded public trust in government — from the “sports rorts affair” to the Witness K case — and all serve to further fuel the QAnon suspicion of authority figures.

5) Longstanding alternative health lobbies

Australia’s sizeable anti-vax movement has found great support in the QAnon community. Fear about mandatory vaccinations is widespread, as is a distrust of “big pharma”.

Also, the continuing roll-out of 5G technology throughout the pandemic has confirmed the belief among QAnon followers that there are ulterior motives for the lockdown. Wellness influencers such as celebrity chef Pete Evans have amplified these messages to their millions of followers.

6) The ‘plandemic’ and weaponising of COVID-19

In the QAnon world, debates about the origin of the coronavirus, death rates, definition of cases, testing protocols and possible treatments are underpinned by a belief that governments are covering up the truth. Many believe the virus isn’t real or deadly, or it was deliberately introduced to hasten government control of populations.

Understanding QAnon followers

Understanding why people become part of these movements is the key to stopping the spread of the QAnon virus. Research into extremist groups shows four elements are important:

1) Real or perceived personal and collective grievances

This year, some of these grievances have been linked directly to the pandemic: government lockdown restrictions, a loss of income, fear about the future and disruption of plans such as travel.

2) Networks and personal ties

Social media has given people the ability to find others with similar grievances or beliefs, to share doubts and concerns and to learn about connecting theories and explanations for what may be troubling them.

3) Political and religious ideologies

QAnon is very hierarchically structured, similar to evangelical Christianity. QAnon followers join a select group of truth seekers who are following the “light” and have a duty to wake up the “sheeple”. Like some religions, the QAnon world is welcoming to all and provides a strong sense of community united by a noble purpose and hope for a better future.

4) Enabling environments and support structures

In the QAnon world, spending many hours on social media is valued as doing “research” and seen as an antidote to the so-called fake news of the mainstream media.

Social isolation, a barrage of changing and confusing pandemic news and obliging social media platforms have been a boon for QAnon groups. However, simply banning or deleting groups runs the danger of confirming the beliefs of QAnon followers.




Read more:
How misinformation about 5G is spreading within our government institutions – and who’s responsible


So what can be done?

Governments need to be more sensitive in their messaging and avoid triggering panic around sensitive issues such as mandatory or forced vaccinations. Transparency about government actions, policies and mistakes all help to build trust.

Governments also need to ensure they are providing enough resources to support people during this challenging time, particularly when it comes to mental and emotional well-being. Resourcing community-building to counter isolation is vital.

For families and friends, losing a loved one “down the Q rabbit hole” is distressing. Research shows that arguing over facts and myths doesn’t work.

Like many conspiracy theories, there are elements of truth in QAnon. Empathy and compassion, rather than ridicule and ostracism, are the keys to remaining connected to the Q follower in your life. Hopefully, with time, they’ll come back.The Conversation

Kaz Ross, Lecturer in Humanities (Asian Studies), University of Tasmania

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Coronavirus misinformation is a global issue, but which myth you fall for likely depends on where you live


Jason Weismueller, University of Western Australia; Jacob Shapiro, Princeton University; Jan Oledan, Princeton University, and Paul Harrigan, University of Western Australia

In February, major social media platforms attended a meeting hosted by the World Health Organization to address coronavirus misinformation. The aim was to catalyse the fight against what the United Nations has called an “infodemic”.

Usually, misinformation is focused on specific regions and topics. But COVID-19 is different. For what seems like the first time, both misinformation and fact-checking behaviours are coordinated around a common set of narratives the world over.

In our research, we identified the key trends in both coronavirus misinformation and fact-checking efforts. Using Google’s Fact Check Explorer interface, we tracked fact-check posts from January to July – with the first checks appearing as early as January 22.

Google’s Fact Check Explorer database is connected with a range of fact-checkers, most of which are part of the Poynter Institute’s International Fact-Checking Network.

A uniform rate of growth

Our research found the volume of fact-checks on coronavirus misinformation increased steadily in the early stages of the virus’s spread (January and February) and then increased sharply in March and April – when the virus started to spread globally.

Interestingly, we found the same pattern of gradual and then sudden increase even after dividing fact-checks into Spanish, Hindi, Indonesian and Portuguese.

Thus, misinformation and subsequent fact-checking efforts trended in a similar way right across the globe. This is a unique feature of COVID-19.

According to our analysis, there has been no equivalent global trend for other issues such as elections, terrorism, police activity or immigration.

Different nations, different misconceptions

On March 16, the Empirical Studies of Conflict Project, in collaboration with Microsoft Research, began cataloguing COVID-19 misinformation.

It did this by collating news articles with reporting by a wide range of local fact-checking networks and global groups such as Agence France-Presse and NewsGuard.

We analysed this data set to explore the evolution of specific COVID-19 narratives, with “narrative” referring to the type of story a piece of misinformation pushes.

For instance, one misinformation narrative concerns the “origin of the virus”. This includes the false claim the virus jumped to humans as a result of someone eating bat soup.




Read more:
The Conversation’s FactCheck granted accreditation by International Fact-Checking Network at Poynter


We found the most common narrative worldwide was related to “emergency responses”. These stories reported false information about government or political responses to fighting the virus’s outbreak.

This may be because, unlike narratives surrounding the “nature of the virus”, it is easy to speculate on (and hard to prove) whether people in power have good or ill intent.

Notably, this was also the most common narrative in the US, with an early example being a false rumour the New York Police Department would immediately lock down New York City.

What’s more, a major motivation for spreading misinformation on social media is politics. The US is a polarised political environment, so this might help explain the trend towards political misinformation.

We also found China has more misinformation narratives than any other country. This may be because China is the world’s most populous country.

However, it’s worth noting the main fact-checking website used by the Empirical Studies of Conflict Project for misinformation coming out of China is run by the Chinese Communist Party.

This chart shows the proportion of total misinformation narratives on COVID-19 by the top ten countries between January and July, 2020.

When fighting misinformation, it is important to have as wide a range of independent and transparent fact-checkers as possible. This reduces the potential for bias.

Hydroxychloroquine and other (non) ‘cures’

Another set of misinformation narratives was focused on “false cures” or “false preventative measures”. This was among the most common themes in both China and Australia.

One example was a video that went viral on social media suggesting hydroxychloroquine is an effective coronavirus treatment. This is despite experts stating it is not a proven COVID-19 treatment, and can actually have harmful side effects.

Myths about the “nature of the virus” were also common. These referred to specific characteristics of the virus – such as that it can’t spread on surfaces. We know this isn’t true.




Read more:
We know how long coronavirus survives on surfaces. Here’s what it means for handling money, food and more


Narratives reflect world events

Our analysis found different narratives peaked at different stages of the virus’s spread.

Misinformation about the nature of the virus was prevalent during the outbreak’s early stages, probably spurred by the initial lack of scientific research on the virus.

In contrast, theories relating to emergency responses surfaced later and remain even now, as governments continue to implement measures to fight COVID-19’s spread.

A wide variety of fact-checkers

We also identified greater diversity in websites fact-checking COVID-19 misinformation, compared to those investigating other topics.

Since January, only 25% of 6,000 fact-check posts or articles were published by the top five fact-checking websites (ranked by number of posts). In comparison, 68% of 3,000 climate change fact-checks were published by the top five websites.




Read more:
5 ways to help stop the ‘infodemic,’ the increasing misinformation about coronavirus


It seems resources previously devoted to a wide range of topics are now homing in on coronavirus misinformation. Nonetheless, it’s impossible to know the total volume of this content online.

For now, the best defence is for governments and online platforms to increase awareness about false claims and build on the robust fact-checking infrastructures at our disposal.The Conversation

Jason Weismueller, Doctoral Researcher, University of Western Australia; Jacob Shapiro, Professor of Politics and International Affairs, Princeton University; Jan Oledan, Research Specialist, Princeton University, and Paul Harrigan, Associate Professor of Marketing, University of Western Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Young men are more likely to believe COVID-19 myths. So how do we actually reach them?



Shutterstock

Carissa Bonner, University of Sydney; Brooke Nickel, University of Sydney, and Kristen Pickles, University of Sydney

If the media is anything to go by, you’d think people who believe coronavirus myths are white, middle-aged women called Karen.

But our new study shows a different picture. We found men and people aged 18-25 are more likely to believe COVID-19 myths. Belief was also higher among people from non-English-speaking backgrounds.

While we’ve heard recently about the importance of public health messages reaching people whose first language isn’t English, we’ve heard less about reaching young men.




Read more:
We asked multicultural communities how best to communicate COVID-19 advice. Here’s what they told us


What did we find?

Sydney Health Literacy Lab has been running a national COVID-19 survey of more than 1,000 social media users each month since Australia’s first lockdown.

A few weeks in, our initial survey showed younger people and men were more likely to think the benefit of herd immunity was covered up, and the threat of COVID-19 was exaggerated.

People who agreed with such statements were less likely to want to receive a future COVID-19 vaccine.




Read more:
The ‘herd immunity’ route to fighting coronavirus is unethical and potentially dangerous


In June, after restrictions eased, we asked social media users about more specific myths. We found:

  • men and younger people were more likely to believe prevention myths, such as hot temperatures or UV light being able to kill the virus that causes COVID-19

  • people with lower education and more social disadvantage were more likely to believe causation myths, such as 5G being used to spread the virus

  • younger people were more likely to believe cure myths, such as vitamin C and hydroxychloroquine being effective treatments.

We need more targeted research with young Australians, and men in particular, about why some of them believe these myths and what might change their mind.




Read more:
No, 5G radiation doesn’t cause or spread the coronavirus. Saying it does is destructive


Although our research has yet to be formally peer-reviewed, it reflects what other researchers have found, both in Australia and internationally.

An Australian poll in May found similar patterns, in which men and younger people believed a range of myths more than other groups.

In the UK, younger people are more likely to hold conspiracy beliefs about COVID-19. American men are also more likely to agree with COVID-19 conspiracy theories than women.

Why is it important to reach this demographic?

We need to reach young people with health messaging for several reasons. In Australia, young people:

The Victorian and New South Wales premiers have appealed to young people to limit socialising.

But is this enough when young people are losing interest in COVID-19 news? How many 20-year-old men follow Daniel Andrews on Twitter, or watch Gladys Berejiklian on television?

How can we reach young people?

We need to involve young people in the design of COVID-19 messages to get the delivery right, if we are to convince them to socialise less and follow prevention advice. We need to include them rather than blame them.

We can do this by testing our communications on young people or running consumer focus groups before releasing them to the public. We can include young people on public health communications teams.

We can also borrow strategies from marketing. For example, we know how tobacco companies use social media to effectively target young people. Paying popular influencers on platforms such as TikTok to promote reliable information is one option.




Read more:
Most adults have never heard of TikTok. That’s by design


We can target specific communities to reach young men who might not access mainstream media, for instance, gamers who have many followers on YouTube.

We also know humour can be more effective than serious messages to counteract science myths.

Some great examples

There are social media campaigns happening right now to address COVID-19, which might reach more young men than traditional public health methods.

NSW Health has recently started a campaign #Itest4NSW encouraging young people to upload videos to social media in support of COVID-19 testing.

The United Nations is running the global Verified campaign involving an army of volunteers to help spread more reliable information on social media. This may be a way to reach private groups on WhatsApp and Facebook Messenger, where misinformation spreads under the radar.

Telstra is using Australian comedian Mark Humphries to address 5G myths in a satirical way (although this would probably have more credibility if it didn’t come from a vested interest).


Finally, tech companies like Facebook are partnering with health organisations to flag misleading content and prioritise more reliable information. But this is just a start to address the huge problem of misinformation in health.




Read more:
Why is it so hard to stop COVID-19 misinformation spreading on social media?


But we need more

We can’t expect young men to access reliable COVID-19 messages from people they don’t know, through media they don’t use. To reach them, we need to build new partnerships with the influencers they trust and the social media companies that control their information.

It’s time to change our approach to public health communication, to counteract misinformation and ensure all communities can access, understand and act on reliable COVID-19 prevention advice.The Conversation

Carissa Bonner, Research Fellow, University of Sydney; Brooke Nickel, Postdoctoral research fellow, University of Sydney, and Kristen Pickles, Postdoctoral Research Fellow, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How misinformation about 5G is spreading within our government institutions – and who’s responsible



Aris Oikonomou/EPA

Michael Jensen, University of Canberra

“Fake news” is not just a problem of misleading or false claims on fringe websites, it is increasingly filtering into the mainstream and has the potential to be deeply destructive.

My recent analysis of more than 500 public submissions to a parliamentary committee on the launch of 5G in Australia shows just how pervasive misinformation campaigns have become at the highest levels of government. A significant number of the submissions peddled inaccurate claims about the health effects of 5G.

These falsehoods were prominent enough the committee felt compelled to address the issue in its final report. The report noted:

community confidence in 5G has been shaken by extensive misinformation preying on the fears of the public spread via the internet, and presented as facts, particularly through social media.

This is a remarkable situation for Australian public policy – it is not common for a parliamentary inquiry to have to rebut the dodgy scientific claims it receives in the form of public submissions.

While many Australians might dismiss these claims as fringe conspiracy theories, the reach of this misinformation matters. If enough people act on the basis of these claims, it can cause harm to the wider public.

In late May, for example, protests against 5G, vaccines and COVID-19 restrictions were held in Sydney, Melbourne and Brisbane. Some protesters claimed 5G was causing COVID-19 and the pandemic was a hoax – a “plandemic” – perpetuated to enslave and subjugate the people to the state.




Read more:
Coronavirus, ‘Plandemic’ and the seven traits of conspiratorial thinking


Misinformation can also lead to violence. Last year, the FBI for the first time identified conspiracy theory-driven extremists as a terrorism threat.

Conspiracy theories that 5G causes autism, cancer and COVID-19 have also led to widespread arson attacks in the UK and Canada, along with verbal and physical attacks on employees of telecommunication companies.

The source of conspiracy messaging

To better understand the nature and origins of the misinformation campaigns against 5G in Australia, I examined the 530 submissions posted online to the parliament’s standing committee on communications and the arts.

The majority of submissions were from private citizens. A sizeable number, however, made claims about the health effects of 5G, parroting language from well-known conspiracy theory websites.

A perceived lack of “consent” about the planned 5G roll-out featured prominently in these submissions. One person argued she did not agree to allow 5G to be “delivered directly into” the home and “radiate” her family.




Read more:
No, 5G radiation doesn’t cause or spread the coronavirus. Saying it does is destructive


To connect sentiments like this to conspiracy groups, I looked at two well-known conspiracy sites that have been identified as promoting narratives consistent with Russian misinformation operations – the Centre for Research on Globalization (CRG) and Zero Hedge.

CRG is an organisation founded and directed by Michel Chossudovsky, a former professor at the University of Ottawa and opinion writer for Russia Today.

CRG has been flagged by NATO intelligence as part of wider efforts to undermine trust in “government and public institutions” in North America and Europe.

Zero Hedge, which is registered in Bulgaria, attracts millions of readers every month and ranks among the top 500 sites visited in the US. Most stories are geared toward an American audience.

Researchers at Rand have connected Zero Hedge with online influencers and other media sites known for advancing pro-Kremlin narratives, such as the claim that Ukraine, and not Russia, is to blame for the downing of Malaysia Airlines flight MH17.

Protesters targeting the coronavirus lockdown and 5G in Melbourne in May.
Scott Barbour/AAP

How it was used in parliamentary submissions

For my research, I scoured the top posts circulated by these groups on Facebook for false claims about the health threats posed by 5G. Some stories I found had headlines like “13 Reasons 5G Wireless Technology will be a Catastrophe for Humanity” and “Hundreds of Respected Scientists Sound Alarm about Health Effects as 5G Networks go Global”.

I then tracked the diffusion of these stories on Facebook and identified 10 public groups where they were posted. Two of the groups specifically targeted Australians – Australians for Safe Technology, a group with 48,000 members, and Australia Uncensored. Many others, such as the popular right-wing conspiracy group QAnon, also contained posts about the 5G debate in Australia.




Read more:
Conspiracy theories about 5G networks have skyrocketed since COVID-19


To determine the similarities in phrasing between the articles posted in these Facebook groups and submissions to the Australian parliamentary committee, I used a text-similarity technique commonly used to detect plagiarism in student papers.

The analysis rates similarities in documents on a scale of 0 (entirely dissimilar) to 1 (exactly alike). There were 38 submissions with at least a 0.5 similarity to posts in the Facebook group 5G Network, Microwave Radiation Dangers and other Health Problems and 35 with a 0.5 similarity to the Australians for Safe Technology group.

This is significant because it means that, for these 73 submissions, at least half the language was, word for word, identical to posts from extreme conspiracy groups on Facebook.
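The article doesn’t name the exact similarity measure used, but scoring document pairs on a 0-to-1 scale can be sketched with one standard approach from plagiarism detection: Jaccard similarity over overlapping word sequences (“shingles”). This is an illustrative sketch, not the author’s actual method, and the function names are my own.

```python
def shingles(text, n=3):
    """Split text into the set of overlapping n-word sequences ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(doc_a, doc_b, n=3):
    """Jaccard similarity of word shingles, from 0 (entirely dissimilar) to 1 (exactly alike)."""
    a, b = shingles(doc_a, n), shingles(doc_b, n)
    if not a and not b:
        return 1.0
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

On this scale, a score of 0.5 means roughly half of a submission’s phrasing overlaps with a given Facebook post, which is the threshold reported in the study.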

The first 5G Optus tower in the suburb of Dickson in Canberra.
Mick Tsikas/AAP

The impact of misinformation on policy-making

The process for soliciting submissions to a parliamentary inquiry is an important part of our democracy. In theory, it provides ordinary citizens and organisations with a voice in forming policy.

My findings suggest Facebook conspiracy groups and potentially other conspiracy sites are attempting to co-opt this process to directly influence the way Australians think about 5G.

In the pre-internet age, misinformation campaigns often had limited reach and took a significant amount of time to spread. They typically required the production of falsified documents and a sympathetic media outlet. Mainstream news would usually ignore such stories and few people would ever read them.

Today, however, one only needs to create a false social media account and a meme. Misinformation can spread quickly if it is amplified through online trolls and bots.

It can also spread quickly on Facebook, with its algorithm designed to drive ordinary users to extremist groups and pages by exploiting their attraction to divisive content.

And once this manipulative content has been widely disseminated, countering it is like trying to put toothpaste back in the tube.

Misinformation has the potential to undermine faith in governments and institutions and make it more challenging for authorities to make demonstrable improvements in public life. This is why governments need to be more proactive in effectively communicating technical and scientific information, like details about 5G, to the public.

Just as nature abhors a vacuum, a public sphere without trusted voices quickly becomes filled with misinformation.The Conversation

Michael Jensen, Senior Research Fellow, Institute for Governance and Policy Analysis, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Can I trust this map? 4 questions to ask when you see a map of the coronavirus pandemic



Shutterstock

Amy Griffin, RMIT University

Maps have shown us how the events of this disastrous year have played out around the globe, from the Australian bushfires to the spread of the COVID-19 pandemic. But there are good reasons to question the maps we see.

Some of these reasons have been explored recently through maps of the bushfires or those created from satellite images.

Maps often inform our actions, but how do we know which ones are trustworthy? My research shows that answering this question may be critically important for the world’s most urgent challenge: the COVID-19 pandemic.




Read more:
Satellite imagery is revolutionizing the world. But should we always trust what we see?


Why are trustworthy maps important?

Maps guide decisions, including those made by governments, private companies, and individual citizens. During the pandemic, government restrictions on activities to protect public health have been strongly informed by maps.

Governments rely on public cooperation with the restrictions, and they have used maps to explain the situation and build trust. If people don’t trust information from the government, they may be less likely to comply with the restrictions.

This highlights the importance of trustworthy COVID-19 maps. Maps can be untrustworthy when they don’t show the most relevant or timely information or because they show information in a misleading way.

Below are a few questions you should ask yourself to work out whether you should trust a map you read.

What information is being mapped?

The number of cases of COVID-19 is an important piece of information. But that number could just reflect how many people are being tested. If you don’t know how much testing is being done, you can misjudge the level of risk.

Low case numbers might mean that there isn’t much testing being done. If the percentage of positive cases (positive test rate) is high, we might be missing cases. So not accounting for the number of tests can be misleading.

The World Health Organization suggests a positive test rate of no more than 10% (at least ten negative tests for every positive test) as the minimum adequate level of testing.

In Australia, we have been at the forefront of making sure enough testing is done, and we are confident we are identifying most cases. Undertesting has been a problem in some other countries.

How is the information being mapped?

It’s not just the numbers that matter. How the numbers are shown is also important so that map readers get an accurate picture of what we know.

The Victorian Government recently advised Melburnians to avoid travel to and from several local council areas because of high case numbers. But their publicly available map does not show this clearly.

Compare the government-produced map with one presenting the same data differently. Most people interpret light colours as few cases and dark colours as more cases, yet the government-produced map uses dark colours for both low and high numbers of cases.
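The design principle at issue, light for few cases and dark for many on a single sequential ramp, can be sketched in a few lines. This is an illustrative toy, not the Victorian government’s actual mapping code:

```python
def shade_for_cases(cases, max_cases):
    """Map a case count to a grey shade: light for few cases, dark for many.

    Returns a hex colour on a sequential ramp, so readers can rank areas
    at a glance; using dark colours at both ends breaks that reading.
    """
    frac = min(cases / max_cases, 1.0) if max_cases else 0.0
    level = round(255 * (1 - frac))  # 255 = white (no cases), 0 = black (most cases)
    return f"#{level:02x}{level:02x}{level:02x}"
```

Because the mapping is monotonic, an area with more cases is always drawn darker, which is the property the government-produced map lacked.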

Active COVID-19 Cases in Victoria, 22 June 2020, ©State of Victoria 2020.
Victorian Government Department of Health and Human Services

Who made this map and why did they make it?

Maps can inform, misinform, and disinform, like any other information source. So it is important to pay attention to the map’s context as well as the author.

Viral maps are maps that spread quickly and widely, often via social media. Viral maps cannot always be trusted, even when they come from a reputable source. Maps that are trustworthy in one context may not be in another.

An example from Australian news media in February shows this. Several media outlets showed a map that was tweeted by UK researchers. The tweet announced the publication of their new paper about COVID-19.

The media reported the map showed locations to which COVID-19 had spread from Wuhan, China, the origin of the outbreak. It actually depicted airline flight routes, and was used in the tweet to illustrate how globally linked the world is. The map was from a 2012 study, not the 2020 one.

Original tweeted map that went viral and was picked up by many news outlets, © WorldPopProject.
WorldPopProject, archived on the Wayback Machine

Many readers may have trusted that reporting because their justifiable anxiety about COVID-19 was reinforced by the map’s design choices. The mass of overlapping red symbols creates a powerful and alarming impression.

While the lines in the map indicate potential routes for virus spread, the map doesn’t provide evidence that the virus did spread along all of these routes. The researchers didn’t claim that it did. But without understanding why the map was made and what it showed, several media outlets reported it inaccurately.

Maps on social media are especially likely to be missing important context and explanation. The airline route map was re-shared many times, often without any source information, making it hard to check its trustworthiness.

Limiting the damage done by COVID-19 is a very substantial challenge. Maps can help ordinary citizens to work together with governments to achieve that outcome. But they need to be made and read with care. Ask yourself what is being mapped, how it’s being mapped, who made the map and why they made it.The Conversation

Amy Griffin, Senior Lecturer, Geospatial Sciences, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why is it so hard to stop COVID-19 misinformation spreading on social media?




Tobias R. Keller, Queensland University of Technology and Rosalie Gillett, Queensland University of Technology

Even before the coronavirus arrived to turn life upside down and trigger a global infodemic, social media platforms were under growing pressure to curb the spread of misinformation.

Last year, Facebook cofounder and chief executive Mark Zuckerberg called for new rules to address “harmful content, election integrity, privacy and data portability”.

Now, amid a rapidly evolving pandemic, when more people than ever are using social media for news and information, it is more crucial than ever that people can trust this content.




Read more:
Social media companies are taking steps to tamp down coronavirus misinformation – but they can do more


Digital platforms are now taking more steps to tackle misinformation about COVID-19 on their services. In a joint statement, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube have pledged to work together to combat misinformation.

Facebook has traditionally taken a less proactive approach to countering misinformation. A commitment to protecting free expression has led the platform to allow misinformation in political advertising.

More recently, however, Facebook’s spam filter inadvertently marked legitimate news information about COVID-19 as spam. While Facebook has since fixed the mistake, this incident demonstrated the limitations of automated moderation tools.

In a step in the right direction, Facebook is allowing national ministries of health and reliable organisations to advertise accurate information on COVID-19 free of charge. Twitter, which prohibits political advertising, is allowing links to the Australian Department of Health and World Health Organization websites.

Twitter is directing users to trustworthy information.
Twitter.com

Twitter has also announced a suite of changes to its rules, including updates to how it defines harm so as to address content that goes against authoritative public health information, and an increase in its use of machine learning and automation technologies to detect and remove potentially abusive and manipulative content.

Previous attempts unsuccessful

Unfortunately, Twitter has been unsuccessful in its recent attempts to tackle misinformation (or, more accurately, disinformation – incorrect information posted deliberately with an intent to obfuscate).

The platform has begun to label doctored videos and photos as “manipulated media”. The crucial first test of this initiative was a widely circulated altered video of Democratic presidential candidate Joe Biden, in which part of a sentence was edited out to make it sound as if he was forecasting President Donald Trump’s re-election.

A screenshot of the tweet featuring the altered video of Joe Biden, with Twitter’s label.
Twitter

It took Twitter 18 hours to label the video, by which time it had already received 5 million views and 21,000 retweets.

The label appeared below the video (rather than in a more prominent place), and was only visible to the roughly 757,000 accounts following the video’s original poster, White House social media director Dan Scavino. Users who saw the content via retweets from the White House (21 million followers) or President Donald Trump (76 million followers) did not see the label.

Labelling misinformation doesn’t work

There are four key reasons why Twitter’s (and other platforms’) attempts to label misinformation were ineffective.

First, social media platforms tend to use automated algorithms for these tasks, because they scale well. But labelling manipulated tweets requires human labour; algorithms cannot decipher complex human interactions. Will social media platforms invest in human labour to solve this issue? The odds are long.

Second, tweets can be shared millions of times before being labelled. Even if removed, they can easily be edited and then reposted to avoid algorithmic detection.

Third, and more fundamentally, labels may even be counterproductive, serving only to pique the audience’s interest. In this way, labels can amplify misinformation rather than curtail it.

Finally, the creators of deceptive content can deny their content was an attempt to obfuscate, and claim unfair censorship, knowing that they will find a sympathetic audience within the hyper-partisan arena of social media.

So how can we beat misinformation?

The situation might seem impossible, but there are some practical strategies that the media, social media platforms, and the public can use.

First, unless the misinformation has already reached a wide audience, avoid drawing extra attention to it. Why give it more oxygen than it deserves?

Second, if misinformation has reached the point at which it requires debunking, be sure to stress the facts rather than simply fanning the flames. Refer to experts and trusted sources, and use the “truth sandwich”, in which you state the truth, and then the misinformation, and finally restate the truth again.

Third, social media platforms should be more willing to remove or restrict unreliable content. This might include disabling likes, shares and retweets for particular posts, and banning users who repeatedly misinform others.

For example, Twitter recently removed coronavirus misinformation posted by Rudy Giuliani and Charlie Kirk; the Infowars app was removed from Google’s app store; and, probably with the highest impact, Facebook, Twitter and Google’s YouTube removed coronavirus misinformation posted by Brazil’s president, Jair Bolsonaro.




Read more:
Meet ‘Sara’, ‘Sharon’ and ‘Mel’: why people spreading coronavirus anxiety on Twitter might actually be bots


Finally, all of us, as social media users, have a crucial role to play in combating misinformation. Before sharing something, think carefully about where it came from. Verify the source and its evidence, double-check with other independent sources, and report suspicious content to the platform directly. Now, more than ever, we need information we can trust.The Conversation

Tobias R. Keller, Visiting Postdoc, Queensland University of Technology and Rosalie Gillett, Research Associate in Digital Platform Regulation, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How not to fall for coronavirus BS: avoid the 7 deadly sins of thought



Shutterstock

Luke Zaphir, The University of Queensland

With the COVID-19 pandemic causing a great deal of anxiety, we might come to think people are irrational, selfish or downright crazy. We see people showing up to public venues en masse or clearing supermarket shelves of toilet paper.

Experts are often ignored. We hear inconsistent information and arguments filled with fallacious reasoning being accepted by a seemingly large number of people.

The explanation for this kind of panicked reasoning may lie in a field of critical thinking called vice epistemology. This theory argues our thinking habits and intellectual character traits cause poor reasoning.

These thinking habits are developed over a lifetime. When these habits are poorly developed, we can end up with intellectual vices. The more we think viciously (as a vice), the harder it is for us to effectively inquire and seek truth.

Vice epistemology points to many thinking vices and sins that cause problems for inquiry. I have chosen seven that show up regularly in the literature:

1. Sin of gullibility

I heard coronavirus particles can stay in the air for up to five days!

Researchers found SARS-CoV-2, the virus that causes the disease COVID-19, remains infectious in airborne droplets for at least three hours.

But all sorts of claims are being touted by people and we’re all guilty of having believed someone who isn’t an expert or simply doesn’t know what they’re talking about. Gullibility as a thinking sin means that we lack the ability to determine the credibility of information.




Read more:
Coronavirus: how long does it take to get sick? How infectious is it? Will you always have a fever? COVID-19 basics explained


Relevant expertise and experience are essential qualities when we’re listening to someone’s own argument. But with something like COVID-19, it’s also important we look at the type of expertise someone has. A GP might be able to tell us how we get the infection – but they wouldn’t count as an expert in infectious disease epidemiology (the way an infectious disease spreads across a population).

2. Sin of cynicism

I’d better stock up on toilet paper before everyone else buys it.

In many ways, cynicism is the opposite of gullibility. It is being overly suspicious of others in their arguments and actions.

If you’ve suddenly become suspicious of your neighbours and what they might do when supermarket stocks are limited, that’s a cynical way to think.

If we think the worst interpretation of arguments and events is correct, we can’t inquire and problem-solve effectively.

3. Sin of pride

I know what’s best for my family!

Pride is an intellectual sin (though it’s more popular as a spiritual one). In this particular case, it is the habit of not admitting to ourselves or to others that we don’t know the answer. Or perhaps that we don’t understand the issue.

We obstruct a genuine search for truth if we are dogmatic in our self-belief.

Do you think you know better than everyone else?
Shutterstock

It’s effective reasoning to take what the evidence and experts say and then apply it specifically to our individual needs. But we have gone astray in our thinking if we contradict those who know more than us and are unwilling to admit our own limitations.

4. Sin of closed-mindedness

I won’t accept that.

Closed-mindedness means we’re not willing to see things from different perspectives or accept new information. It’s a serious intellectual vice as it directly interferes with our ability to adjust our beliefs according to new information.

Worse still, being closed-minded to new ideas and information makes it even more challenging to learn and grow – we’d be closed-minded to the idea that we’re closed-minded.

5. Sin of prejudice

I’ve stopped buying Chinese food – just in case.

Prejudiced thinking is an intellectual vice we often start developing early in life. Children can be incredibly prejudiced in small ways – such as being unwilling to try new foods because they already somehow know they’re gross.

Read more: Coronavirus fears can trigger anti-Chinese prejudice. Here's how schools can help

As a character flaw, it means we often substitute preconceived notions for actual thinking.

6. Sin of negligence

SARS was more deadly than COVID-19 and that wasn't such a big deal.

Creating a poor analogy like this one is not a substitute for thoughtful research and considered analysis.

Still, it is difficult to explore every single topic with thorough evaluation. There’s so much information out there at the moment it can be a real chore to investigate every claim we hear.

But if we’re not willing to check the facts, we’re being negligent in our thinking.

7. Sin of wishful thinking

This will all be over in a week or two and it’ll be business as usual.

Our capacity to believe in ourselves, our hard work, our friends and culture can often blind us to hard truths.

It's perfectly fine to aim for a certain outcome, but we need to recognise that no matter how much we hope for it, our desire doesn't affect the likelihood of it happening.

Read more: Thinking about thinking helps kids learn. How can we teach critical thinking?

A pandemic like COVID-19 shows our way of life is fragile and can change at any moment. Wishful thinking ignores the stark realities and can set us up for disappointment.

So, what can we do about it?

There are some questions we can ask ourselves to help improve our intellectual character traits:

What would change my mind?

It's a red flag for the sin of pride if nothing will change your mind.

What is the strongest argument the other side has?

It's worth keeping in mind that each of us usually holds only a piece of the truth in our own perspective. Unless there's wanton cruelty involved, chances are differing arguments will have some good points.

What groups would gain or lose the most if we keep thinking this way?

Sometimes we fail to consider the practical outcomes of our thoughts for people who aren’t like us. We’ve seen in the last few weeks that the people who have a lot to lose (such as casual workers) matter when it comes to the way we respond to the pandemic.

It’s worth taking a moment to consider their perspectives.

How much do you actually know about an issue? Who is an expert?

The experts always have something to say. If they agree on an issue, it's a good indication we should believe them. If there isn't a general consensus, we should be sceptical of one-sided claims to truth.

And remember the person's actual expertise – it's too easy to mistake a political leader or famous person for an expert.

In challenging days like these, we may be able to help ensure a better outcome for everyone if we start by asking ourselves a few simple questions.

Luke Zaphir, Researcher for the University of Queensland Critical Thinking Project, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.