Victoria’s shadow treasurer, Louise Staley, is putting about questions that hint darkly at a cover-up of how Premier Daniel Andrews injured his back three months ago.
She has not produced a shred of evidence to support this suggestion, yet the exercise has gained substantial traction in the media. All the main news outlets have had to pay attention to it.
It is the kind of political chicanery that confronts responsible media with a dilemma: how to hold a public official to account without oxygenating the conspiracy theory in which she is trading.
In this case, the fake-news manufacturing process has worked like this.
A public official puts on the public record some questions that look innocuous but will be associated in some minds with a scurrilous conspiracy theory circulating on social media.
Because it is a public official putting this on the public record, it is then picked up by a journalist.
The journalist in turn asks a question about it at a press conference. In this case, the question was put to Acting Premier James Merlino on June 8.
It necessarily generates a response from Merlino and that adds a further ingredient of apparent legitimacy to the mix.
Ambulance Victoria feels it necessary to issue a statement reiterating the exact circumstances in which an ambulance was called to take Andrews to hospital.
Then the Chief Commissioner of Police, Shane Patton, feels obliged to put out a statement confirming police did not attend the scene.
All this adds to the false impression there is some real news here.
But it doesn’t make the originating material true.
The originating material remains fake, but now the conspiracy theory has accumulated many of the attributes of a real story.
However, responsible media recognise what the real story is.
The real story is the attempt by a senior state Liberal MP to manufacture fake news – so they tell this story without oxygenating the content of the fake news itself.
Australia’s professional mass media – television, radio, newspapers – have followed this course.
They have reported Staley’s raising of the conspiracy theory and her formulation of a series of questions to the government, while at the same time quoting condemnation of her antics by Merlino and others in the state government.
Even Sky News, notorious for its anti-Labor politics, has been circumspect. It has contented itself with references to a “torrent” of “amazing rumours” before retreating to safer and more familiar ground by describing Andrews as a Soviet-style paramount leader.
It reflects well on the Australian media – perhaps reinforced in their caution by the oppressiveness of Australia’s defamation laws – that they have handled this nasty outbreak of fakery with decency, accuracy and fairness.
The result is that, in this case, the manufacturing process has been cut off at the point of distribution.
For the record, Andrews slipped on wet stairs at a holiday house in Sorrento on the Mornington Peninsula on March 9, sustaining several broken ribs and a fractured vertebra. He is expected to return to work some time this month.
The government is rolling out a new public information campaign this week to reassure the public about the safety of COVID-19 vaccines, which one expert has said “couldn’t be more crucial” to people actually getting the jabs when they are available.
Access to vaccines is the most important barrier to widespread immunisations, so this campaign should go a long way toward getting the right people vaccinated at the right time.
But it also comes as government ministers — and even the prime minister — have refused to address the COVID-19 misinformation coming from those within their own ranks.
Despite advice from the Therapeutic Goods Administration explaining that hydroxychloroquine is not an effective treatment for COVID-19, MP Craig Kelly has continued to promote the opposite on Facebook. A letter he wrote on the same topic, bearing the Commonwealth coat of arms, was also widely distributed.
He has also incorrectly advocated the use of the anti-parasitic drug ivermectin as a treatment for COVID-19, and encouraged people to protest against what he called “health bureaucrats in an ivory tower”.
Compared to health experts, politicians and celebrities tend to have access to larger and more diverse audiences, particularly on social media. But politicians and celebrities may not always have the appraisal skills they need to assess clinical evidence.
I spend much of my time examining how researchers introduce biases into the design and reporting of trials and systematic reviews. Kelly probably has less experience in critically appraising trial design and reporting. But if he and I were competing for attention among Australians, his opinions would certainly reach a much larger and varied segment of the population.
Does misinformation really cause harm?
According to a recent Quantum Market Research survey of 1,000 people commissioned by the Department of Health, four in five respondents said they were likely to get a COVID-19 vaccine when it’s made available.
Australia generally has high levels of vaccine confidence compared to other wealthy countries – 72% strongly agree that vaccines are safe and less than 2% strongly disagree.
But there does appear to be some hesitancy about the COVID-19 vaccine. In the Quantum survey, 27% of respondents overall, and 42% of women in their 30s, had concerns about vaccine safety. According to the report, this showed
a need to dispel some specific fears held by certain cohorts of the community in relation to potential adverse side effects.
For other types of COVID misinformation, a University of Sydney study found that younger men had stronger agreement with misconceptions and myths – such as that hydroxychloroquine is an effective treatment, that 5G networks spread the virus, or that the virus was engineered in a lab.
Surveys showing how attitudes and beliefs vary by demographics are useful, but it is difficult to know how exposure to misinformation affects the decisions people make about their health in the real world.
Studies measuring what happens to people’s behaviours after misinformation reaches a mainstream audience are rare. One study from 2015 looked at the effect of an ABC Catalyst episode that misrepresented evidence about cholesterol-lowering drugs — it found fewer people filled their statin prescriptions after the show.
When it comes to COVID-19, researchers are only starting to understand the influence of misinformation on people’s behaviours.
After public discussion about using bleach to potentially treat COVID-19, for instance, the number of internet searches about injecting and drinking disinfectants increased. This was followed by a spike in the number of calls to poison control phone lines for disinfectant-related injuries.
Does countering misinformation online work?
The aim of countering misinformation is not to change the opinions of the people posting it, but to reduce misperceptions among the often silent audience. Public health organisations promoting the benefits of vaccinations on social media consider this when they decide to engage with anti-vaccine posts.
A study published this month by two American researchers, Emily Vraga and Leticia Bode, tested the effect of posting an infographic correction in response to misinformation about the science of a false COVID-19 prevention method. They found a bot developed with the World Health Organization and Facebook was able to reduce misperceptions by posting factual responses to misinformation when it appeared.
A common concern about correcting misinformation in this way is that it might cause a backfire effect, leading people to become more entrenched in misinformed beliefs. But research shows the backfire effect appears to be much rarer than first thought.
Vraga and Bode found no evidence of a backfire effect in their study. Their results suggest that responding to COVID-19 misinformation with factual information is likely to do more good than harm.
So, what’s the best strategy?
Social media platforms can address COVID-19 misinformation by simply removing or labelling posts and deplatforming users who post it.
This is probably most effective in situations where the user posting the misinformation has a small audience. In these cases, responding to misinformation with facts in a more direct way may be a waste of time and could unintentionally amplify the post.
When misinformation is shared by people like Kelly who are in positions of power and influence, removing those posts is like cutting a head off a hydra. It doesn’t stop the spread of misinformation at the source and more of the same will likely fill the void left behind.
In these instances, governments and organisations should consider directly countering misinformation where it occurs. To do this effectively, they need to consider the size of the audience, respond to the misinformation and not the person, and present evidence in simple and engaging ways.
The government’s current campaign fills an important gap in providing simple and clear information about who should get vaccinated and how. It doesn’t directly address the misinformation problem, but I think this would be the wrong place for that kind of effort, anyway.
Instead, research suggests it might be better to directly challenge misinformation where it appears. Rather than demanding the deplatforming of the people who post misinformation, we might instead think of it as an opportunity to correct misperceptions in front of the audiences that really need it.
But to a philosopher like me, more vexing than these calculated cases of disinformation has been the amount of sloppy reasoning in public discourse about Australia’s COVID epidemic.
Barely a day goes by without a politician, official or commentator making the kind of basic failure of critical thinking that I teach first-year philosophy undergraduates to avoid.
While these are sometimes deliberate attempts to obfuscate, it is more frequently the well-intentioned who fall victim to these often appealing fallacies. The only antidote is a large dose of scepticism, mixed with some understanding of where our reasoning frequently goes wrong.
Here are three critical thinking errors that were rife in 2020.
Fallacy 1: using the wrong baseline for comparison

It is certainly reasonable to ask whether the costs of lockdown outweigh the benefits. But any such reckoning needs to factor in the costs of not imposing a lockdown.
It is a mistake to use the “pre-COVID normal” as the baseline for comparison. We’re not in Kansas any more, Toto. Pre-COVID cancer rates or school grades are irrelevant when thinking about the impact of public health measures in our current circumstances.
What is relevant are the expected outcomes given the impact of the COVID infections that would occur without public health measures in place. In the case of cancer detection, for example, we should expect a drop in diagnoses relative to pre-COVID levels both with, and without, lockdowns in place. During a pandemic, the fear of infection is a significant extra factor making people less likely to visit their doctor for a cancer check.
Similarly, when looking at the impact of school closures, particularly on socioeconomically vulnerable students, we need to factor in the likely impact of increased COVID infections. As has been shown both at home and abroad, the impacts of COVID outbreaks are disproportionately felt by disadvantaged communities.
Fallacy 2: failing to see the nuance behind the numbers
Victorians were understandably glued to the daily case numbers during their epic lockdown, while their New South Wales neighbours nervously kept an eye on their own tally. But the focus on numbers can mislead; bald case numbers don’t tell the whole story.
Why, for example, did two such similar states have such contrasting fortunes? Behind the headline numbers were some key differences that can explain why Victoria endured a major second wave, while NSW escaped relatively unscathed. Not all of them involve differences in contact-tracing capacity.
To illustrate, despite similar absolute case numbers over the ten days to October 14, about 60% of the cases in NSW were returned international travellers, compared with none in Victoria. Given that a positive case in hotel quarantine is easier to contain than one at large among the public, Victoria clearly faced a more challenging situation than NSW.
Fallacy 3: thinking everything happens for a reason
The ancient Greeks blamed unexpected bad outcomes in their lives on Tykhe, the goddess of chance, and the Romans similarly blamed Fortuna. In our largely secular modern world, however, we typically assume a bad outcome to be a sign of failure rather than simple bad luck.
But in a pandemic, not only can relatively small differences in situations lead to large differences in outcomes, but these small differences often come down to dumb luck. This is especially true when talking about very small numbers of cases, as we have in Australia now.
It is easy to interpret any jump in case numbers as indicating a failure of the public health measures in place. But this overlooks the role of other factors: whether a COVID-positive person lives with one other person or six, or whether they work in aged care, or from home, where they shop, whether or not they developed symptoms while infected, and whether or not they self-isolated as a result. All of this can make a significant difference to the potential number of others whom they infect with the virus.
It is also harder to trace the contacts of someone working outside the home, compared with someone working from home and only leaving to go to the shops once a week. No two infections are truly equal.
This doesn’t mean we shouldn’t be concerned by a sudden spike in cases, and it doesn’t mean we can’t ask questions about what went wrong. But it also doesn’t mean it necessarily warrants any shift from our current public health measures.
It’s an uncomfortable thought, but luck is a huge part of where we find ourselves today, and where we could be in the future.
The president, Donald Trump, baselessly claimed at a White House press conference on Friday morning, Australian time, that the presidential election had been stolen from him by fraudulent and corrupt electoral processes.
This confronted the television networks, whose job is to report the news, with an acute dilemma.
In an already volatile political atmosphere, do they go on reporting these lies, laced with an undertone of veiled incitement to violence? Or do they cut away on the grounds that by continuing to broadcast this stuff, they are helping to propagate lies and perhaps to oxygenate a threat to the civil peace?
It was not rooted in reality and at this point, where our country is, it’s dangerous.
CNBC presenter Shepard Smith said the network was not going to allow it to keep going because what Trump was saying was not true.
CNN and Rupert Murdoch’s Fox News broadcast Trump’s entire press conference but immediately afterwards challenged what he said. CNN’s fact-checker Daniel Dale said it had been the most “dishonest” speech Trump had ever given, with anchor Jake Tapper saying Trump’s statements were “pathetic” and “a feast of falsehoods”.
Fox’s host Martha MacCallum said the supposed evidence and proof of election misconduct would need to be produced.
Even Murdoch’s New York Post, which had endorsed Trump’s re-election, accused him of making “baseless” election fraud claims, quoting a Republican Congressman as saying they were “insane”.
The Washington Post carried two news stories on its front page, clearly calling out Trump’s lies: “Falsehood upon falsehood”; “A speech of historic dishonesty”.
A serious decision to silence the President
But what of the networks’ decision to cut away?
Silencing a public official in the course of his official duties is a very serious abrogation of the media’s duty in a democracy.
But so is allowing the airwaves to be used in such a way as to arouse fears for public confidence in the democratic process and — as MSNBC’s Williams argued — even public safety.
On the run, many of the big networks prioritised public confidence in the democratic process, and public safety, over the reporting of the president’s words.
It is a rare circumstance in any democratic society that the media are placed in the position of having to shoulder such a heavy burden of responsibility.
It is most unlikely that once the present crisis is over, assuming Democratic candidate Joe Biden wins, the American media will find themselves in this position again.
Even so, a Rubicon has been crossed. A president of the United States, a publicly elected official, has been silenced by significant elements of the professional mass media in the course of his public duties.
This was done principally on the grounds he was lying to the people in circumstances where there was a foreseeable risk of serious harm to the body politic, and there was no practicable way to reduce the risk.
Is that a standard the media are prepared to set for the future? If so, they would be giving themselves a power that goes well beyond anything the media have claimed for themselves up till now.
Journalists need to keep their nerve
In considering this, two questions arise.
What if all media outlets had adopted this course? No one except those at the White House press conference would have known the whole of what Trump said, seen the context and observed the demeanour with which he said it.
Would it have been enough to do as CNN and Fox did — report the speech and then repudiate it?
An answer to that would be: the lies were coming so thick and fast, and were so damaging to the public interest, that it would have been impossible to set the record straight in anything like real time.
Real-time fact-checking is a relatively new development, and a welcome one. But its feasibility should not be a criterion for deciding whether to publish breaking news, unless there is doubt about whether the breaking news is actually happening.
The networks that cut away doubtless acted in good faith to do right by the country. Trump’s speech was shocking and irresponsible.
However, American democracy is in crisis. At this time, above all, the public needs the institution of the fourth estate to keep its nerve and a clear head.
A primary norm of journalism is to inform the public. That certainly means being fair and accurate. But if the news contains lies, the norm is to publish and then call out the lying and set the record straight as soon as possible.
The networks need to explain to their audiences their reasoning behind the decision to cut away, and the media as a whole need to realise that if the norms of journalism break down, that just adds to the tragic chaos into which their country has descended.
On September 5, a coalition of online groups is planning an Australia-wide action called the “Day of Freedom”. The organisers claim hundreds of thousands will join them on the streets in defiance of restrictions on group gatherings and mask-wearing mandates.
Some online supporters believe Stage 5 lockdown will be introduced in Melbourne the following week and the “Day of Freedom” is the last chance for Australians to stand up to an increasingly tyrannical government.
The action is the latest in a series of protests in Australia against the government’s COVID-19 restrictions. The main issues brought up during these protests centre around 5G, government surveillance, freedom of movement and, of course, vaccinations.
And one general conspiracy theory now unites these disparate groups — QAnon.
Since its inception in the US in late 2017, QAnon has morphed beyond a specific, unfounded claim about President Donald Trump working with special counsel Robert Mueller to expose a paedophile ring supposedly run by Bill and Hillary Clinton and the “deep state”. Now, it is an all-encompassing world of conspiracies.
Last week, Facebook deleted over 790 groups, 100 pages and 1,500 ads tied to QAnon and restricted the accounts of hundreds of other Facebook groups and thousands of Instagram accounts. QAnon-related newsfeed rankings and search results were also downgraded.
Facebook is aiming to reduce the organising ability of the QAnon community, but so far such crackdowns seem to have had little effect on the spread of misinformation.
Why Australians are turning to QAnon in large numbers
QAnon encourages people to look for evidence of conspiracies in the media and in government actions. Looking back over the last several years, we can see a range of events or conspiracy theories that have helped QAnon appeal to increasing numbers of followers in Australia.
1) Conspiracies about global governance
In 2015, Senator Malcolm Roberts claimed the UN’s 1992 “Agenda 21” plan for sustainable development was a global plan aimed at depriving nations of their sovereignty and citizens of their property rights.
The belief that “Agenda 21” is a blueprint for corrupt global governance has become a core tenet of QAnon in Australia.
Any talk of “global bankers and cabals” directly taps into longstanding anti-Semitic conspiracies about supposed Jewish world domination often centred on the figure of billionaire George Soros. The pandemic and QAnon have also proven to be fertile ground for neo-Nazis in Australia.
2) The impact of far-right social media
QAnon has its roots on the far-right bulletin boards of the websites 4Chan and 8Chan. Other campaigns from the same sources, such as the “It’s OK to be White” motion led by One Nation leader Pauline Hanson in the Senate, have been remarkably successful in Australia, showing our susceptibility to viral trolling efforts.
His failure is widely shared in QAnon circles as proof of a cover-up of child abuse at all levels of Australian government. The belief the country is run by a corrupt paedophile cabal is the most fundamental plank of the QAnon platform.
4) Increasingly ‘unaccountable and incompetent’ governments
A number of recent events have eroded public trust in government — from the “sports rorts affair” to the Witness K case — and all serve to further fuel the QAnon suspicion of authority figures.
5) Longstanding alternative health lobbies
Australia’s sizeable anti-vax movement has found great support in the QAnon community. Fear about mandatory vaccinations is widespread, as is a distrust of “big pharma”.
Also, the continuing roll-out of 5G technology throughout the pandemic has confirmed the belief among QAnon followers that there are ulterior motives for the lockdown. Wellness influencers such as celebrity chef Pete Evans have amplified these messages to their millions of followers.
6) The ‘plandemic’ and weaponising of COVID-19
In the QAnon world, debates about the origin of the coronavirus, death rates, definition of cases, testing protocols and possible treatments are underpinned by a belief that governments are covering up the truth. Many believe the virus isn’t real or deadly, or it was deliberately introduced to hasten government control of populations.
Understanding QAnon followers
Understanding why people become part of these movements is the key to stopping the spread of the QAnon virus. Research into extremist groups shows four elements are important:
1) Real or perceived personal and collective grievances
This year, some of these grievances have been linked directly to the pandemic: government lockdown restrictions, a loss of income, fear about the future and disruption of plans such as travel.
2) Networks and personal ties
Social media has given people the ability to find others with similar grievances or beliefs, to share doubts and concerns and to learn about connecting theories and explanations for what may be troubling them.
3) Political and religious ideologies
QAnon is very hierarchically structured, similar to evangelical Christianity. QAnon followers join a select group of truth seekers who are following the “light” and have a duty to wake up the “sheeple”. Like some religions, the QAnon world is welcoming to all and provides a strong sense of community united by a noble purpose and hope for a better future.
4) Enabling environments and support structures
In the QAnon world, spending many hours on social media is valued as doing “research” and seen as an antidote to the so-called fake news of the mainstream media.
Social isolation, a barrage of changing and confusing pandemic news and obliging social media platforms have been a boon for QAnon groups. However, simply banning or deleting groups runs the danger of confirming the beliefs of QAnon followers.
Governments need to be more sensitive in their messaging and avoid triggering panic around sensitive issues such as mandatory or forced vaccinations. Transparency about government actions, policies and mistakes all help to build trust.
Governments also need to ensure they are providing enough resources to support people during this challenging time, particularly when it comes to mental and emotional well-being. Resourcing community-building to counter isolation is vital.
Like many conspiracy theories, there are elements of truth in QAnon. Empathy and compassion, rather than ridicule and ostracism, are the keys to remaining connected to the Q follower in your life. Hopefully, with time, they’ll come back.
Usually, misinformation is focused on specific regions and topics. But COVID-19 is different. For what seems like the first time, both misinformation and fact-checking behaviours are coordinated around a common set of narratives the world over.
In our research, we identified the key trends in both coronavirus misinformation and fact-checking efforts. Using Google’s Fact Check Explorer programming interface, we tracked fact-check posts from January to July, with the first checks appearing as early as January 22.
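As an illustration of the kind of query such a tracking effort might rely on, here is a minimal sketch against Google’s public Fact Check Tools API. The `claims:search` endpoint and its parameter names are real, but the query terms, defaults and per-language loop are illustrative assumptions, not the authors’ actual method:

```python
from urllib.parse import urlencode

# Base endpoint of Google's Fact Check Tools API (claims:search).
FACT_CHECK_ENDPOINT = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def build_claim_search_url(query, language_code="en", page_size=100,
                           api_key="YOUR_API_KEY"):
    """Build a claims:search request URL for a topic and language.

    The parameter names (query, languageCode, pageSize, key) follow the
    public API; the defaults here are assumptions for illustration.
    """
    params = urlencode({
        "query": query,
        "languageCode": language_code,
        "pageSize": page_size,
        "key": api_key,
    })
    return f"{FACT_CHECK_ENDPOINT}?{params}"

# Example: one request URL per language, mirroring the article's
# breakdown into Spanish, Hindi, Indonesian and Portuguese fact-checks.
for lang in ["en", "es", "hi", "id", "pt"]:
    print(build_claim_search_url("coronavirus", language_code=lang))
```

Fetching each URL (with a valid API key) returns a JSON list of claims and their published fact-checks, which can then be bucketed by month and language to reproduce the growth curves described above.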
A uniform rate of growth
Our research found the volume of fact-checks on coronavirus misinformation increased steadily in the early stages of the virus’s spread (January and February) and then increased sharply in March and April – when the virus started to spread globally.
Interestingly, we found the same pattern of gradual and then sudden increase even after dividing fact-checks into Spanish, Hindi, Indonesian and Portuguese.
Thus, misinformation and subsequent fact-checking efforts trended in a similar way right across the globe. This is a unique feature of COVID-19.
According to our analysis, there has been no equivalent global trend for other issues such as elections, terrorism, police activity or immigration.
We found the most common narrative worldwide was related to “emergency responses”. These stories reported false information about government or political responses to fighting the virus’s outbreak.
This may be because, unlike narratives surrounding the “nature of the virus”, it is easy to speculate on (and hard to prove) whether people in power have good or ill intent.
Notably, this was also the most common narrative in the US, with an early example being a false rumour the New York Police Department would immediately lock down New York City.
What’s more, a major motivation for spreading misinformation on social media is politics. The US is a polarised political environment, so this might help explain the trend towards political misinformation.
We also found China has more misinformation narratives than any other country. This may be because China is the world’s most populous country.
However, it’s worth noting the main fact-checking website used by the Empirical Studies of Conflict Project for misinformation coming out of China is run by the Chinese Communist Party.
When fighting misinformation, it is important to have as wide a range of independent and transparent fact-checkers as possible. This reduces the potential for bias.
Hydroxychloroquine and other (non) ‘cures’
Another set of misinformation narratives was focused on “false cures” or “false preventative measures”. This was among the most common themes in both China and Australia.
One example was a video that went viral on social media suggesting hydroxychloroquine is an effective coronavirus treatment. This is despite experts stating it is not a proven COVID-19 treatment, and can actually have harmful side effects.
Myths about the “nature of the virus” were also common. These referred to specific characteristics of the virus – such as that it can’t spread on surfaces. We know this isn’t true.
Our analysis found different narratives peaked at different stages of the virus’s spread.
Misinformation about the nature of the virus was prevalent during the outbreak’s early stages, probably spurred by an initial lack of scientific research regarding the nature of the virus.
In contrast, theories relating to emergency responses surfaced later and remain even now, as governments continue to implement measures to fight COVID-19’s spread.
A wide variety of fact-checkers
We also identified greater diversity in websites fact-checking COVID-19 misinformation, compared to those investigating other topics.
Since January, only 25% of 6,000 fact-check posts or articles were published by the top five fact-checking websites (ranked by number of posts). In comparison, 68% of 3,000 climate change fact-checks were published by the top five websites.
If the media is anything to go by, you’d think people who believe coronavirus myths are white, middle-aged women called Karen.
But our new study shows a different picture. We found men and people aged 18-25 are more likely to believe COVID-19 myths. We also found higher levels of belief among people from non-English-speaking backgrounds.
While we’ve heard recently about the importance of public health messages reaching people whose first language isn’t English, we’ve heard less about reaching young men.
But is this enough when young people are losing interest in COVID-19 news? How many 20-year-old men follow Daniel Andrews on Twitter, or watch Gladys Berejiklian on television?
How can we reach young people?
We need to involve young people in the design of COVID-19 messages to get the delivery right, if we are to convince them to socialise less and follow prevention advice. We need to include them rather than blame them.
We can do this by testing our communications on young people or running consumer focus groups before releasing them to the public. We can include young people on public health communications teams.
We can also borrow strategies from marketing. For example, we know how tobacco companies use social media to effectively target young people. Paying popular influencers on platforms such as TikTok to promote reliable information is one option.
We can target specific communities to reach young men who might not access mainstream media, for instance, gamers who have many followers on YouTube.
We also know humour can be more effective than serious messages to counteract science myths.
Some great examples
There are social media campaigns happening right now to address COVID-19, which might reach more young men than traditional public health methods.
NSW Health has recently started a campaign #Itest4NSW encouraging young people to upload videos to social media in support of COVID-19 testing.
The United Nations is running the global Verified campaign involving an army of volunteers to help spread more reliable information on social media. This may be a way to reach private groups on WhatsApp and Facebook Messenger, where misinformation spreads under the radar.
Telstra is using Australian comedian Mark Humphries to address 5G myths in a satirical way (although this would probably have more credibility if it didn’t come from a vested interest).
Finally, tech companies like Facebook are partnering with health organisations to flag misleading content and prioritise more reliable information. But this is just a start to address the huge problem of misinformation in health.
We can’t expect young men to access reliable COVID-19 messages from people they don’t know, through media they don’t use. To reach them, we need to build new partnerships with the influencers they trust and the social media companies that control their information.
It’s time to change our approach to public health communication, to counteract misinformation and ensure all communities can access, understand and act on reliable COVID-19 prevention advice.
These falsehoods were prominent enough that the committee felt compelled to address the issue in its final report. The report noted:
community confidence in 5G has been shaken by extensive misinformation preying on the fears of the public, spread via the internet and presented as facts, particularly through social media.
This is a remarkable situation for Australian public policy – it is not common for a parliamentary inquiry to have to rebut the dodgy scientific claims it receives in the form of public submissions.
While many Australians might dismiss these claims as fringe conspiracy theories, the reach of this misinformation matters. If enough people act on the basis of these claims, it can cause harm to the wider public.
Conspiracy theories that 5G causes autism, cancer and COVID-19 have also led to widespread arson attacks in the UK and Canada, along with verbal and physical attacks on employees of telecommunication companies.
The majority of submissions were from private citizens. A sizeable number, however, made claims about the health effects of 5G, parroting language from well-known conspiracy theory websites.
A perceived lack of “consent” about the planned 5G roll-out featured prominently in several of these submissions. One person argued she did not agree to allow 5G to be “delivered directly into” the home and “radiate” her family.
I then tracked the diffusion of these stories on Facebook and identified 10 public groups where they were posted. Two of the groups specifically targeted Australians – Australians for Safe Technology, a group with 48,000 members, and Australia Uncensored. Many others, such as the popular right-wing conspiracy group QAnon, also contained posts about the 5G debate in Australia.
To determine the similarities in phrasing between the articles posted on these Facebook groups and submissions to the Australian parliamentary committee, I used the same technique to detect similarities in texts that is commonly used to detect plagiarism in student papers.
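The article does not name the specific tool used. As a rough illustration of this family of techniques, one common approach scores two texts by the overlap of their word n-grams: near-identical phrasing produces a high score. The texts below are invented examples, not actual submissions.

```python
# Minimal sketch of n-gram overlap scoring, one common basis for
# plagiarism-style text similarity detection. Texts are invented.

def ngrams(text: str, n: int = 3) -> set:
    """Return the set of word n-grams in a lowercased text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard_similarity(a: str, b: str, n: int = 3) -> float:
    """Jaccard overlap of word n-grams: 0.0 (disjoint) to 1.0 (identical)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

submission = "5G will radiate our families without consent"
article = "they claim 5G will radiate our families without consent or debate"
print(round(jaccard_similarity(submission, article), 2))  # → 0.56
```

A high score between a submission and a known conspiracy-site article suggests the submission copied or closely paraphrased it, which is the same signal plagiarism detectors use on student papers.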
Today, however, one only needs to create a false social media account and a meme. Misinformation can spread quickly if it is amplified through online trolls and bots.
It can also spread quickly on Facebook, with its algorithm designed to drive ordinary users to extremist groups and pages by exploiting their attraction to divisive content.
And once this manipulative content has been widely disseminated, countering it is like trying to put toothpaste back in the tube.
Misinformation has the potential to undermine faith in governments and institutions and make it more challenging for authorities to make demonstrable improvements in public life. This is why governments need to be more proactive in effectively communicating technical and scientific information, like details about 5G, to the public.
Just as nature abhors a vacuum, a public sphere without trusted voices quickly becomes filled with misinformation.
Maps have shown us how the events of this disastrous year have played out around the globe, from the Australian bushfires to the spread of the COVID-19 pandemic. But there are good reasons to question the maps we see.
Maps often inform our actions, but how do we know which ones are trustworthy? My research shows that answering this question may be critically important for the world’s most urgent challenge: the COVID-19 pandemic.
Maps guide decisions, including those made by governments, private companies, and individual citizens. During the pandemic, government restrictions on activities to protect public health have been strongly informed by maps.
Governments rely on public cooperation with the restrictions, and they have used maps to explain the situation and build trust. If people don’t trust information from the government, they may be less likely to comply with the restrictions.
This highlights the importance of trustworthy COVID-19 maps. Maps can be untrustworthy when they don’t show the most relevant or timely information or because they show information in a misleading way.
Below are a few questions you should ask yourself to work out whether you should trust a map you read.
What information is being mapped?
The number of cases of COVID-19 is an important piece of information. But that number could just reflect how many people are being tested. If you don’t know how much testing is being done, you can misjudge the level of risk.
Low case numbers might mean that there isn’t much testing being done. If the percentage of positive cases (positive test rate) is high, we might be missing cases. So not accounting for the number of tests can be misleading.
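To make the arithmetic concrete, here is a small sketch with invented numbers: a region with fewer reported cases can still have a much higher positivity rate, a sign that testing is not keeping up with the outbreak.

```python
# Illustrative only: the case and test counts below are invented,
# chosen to show why raw case counts mislead without the testing denominator.

def positivity_rate(positive_cases: int, tests_done: int) -> float:
    """Share of tests that came back positive."""
    return positive_cases / tests_done

# Region A: few cases, but barely any testing -> likely missing cases.
# Region B: more cases, but heavy testing -> better-characterised outbreak.
region_a = positivity_rate(positive_cases=50, tests_done=400)
region_b = positivity_rate(positive_cases=200, tests_done=20000)

print(f"Region A positivity: {region_a:.1%}")  # → 12.5%
print(f"Region B positivity: {region_b:.1%}")  # → 1.0%
```

Despite reporting four times fewer cases, Region A's high positivity suggests many infections are going undetected, so a map showing only case counts would understate its risk.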
It’s not just the numbers that matter. How the numbers are shown is also important so that map readers get an accurate picture of what we know.
The Victorian Government recently advised Melburnians to avoid travel to and from several local council areas because of high case numbers. But its publicly available map does not show this clearly.
Compare the government-produced map with the same data presented differently. Most people interpret light as few cases and dark as more cases. The government-produced map uses dark colours for both low and high numbers of cases.
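The fix is a shading rule that is monotonic: more cases should always mean a darker shade. A minimal sketch of such a rule, with invented case counts (the `shade` function is a hypothetical illustration, not how any government map is built):

```python
# Sketch of a sequential (perceptually ordered) shading rule:
# higher case counts always map to darker shades. Counts are invented.

def shade(cases: int, max_cases: int) -> float:
    """Return a lightness value from 1.0 (white, zero cases) down to
    0.0 (black, the maximum). Monotonic: more cases is always darker."""
    return 1.0 - min(cases, max_cases) / max_cases

counts = [0, 10, 50, 200]
lightness = [shade(c, max_cases=200) for c in counts]
print(lightness)  # → [1.0, 0.95, 0.75, 0.0]
```

Because lightness decreases as cases rise, a reader can rank areas at a glance; a palette that reuses dark colours at both ends breaks that ranking.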
Who made this map and why did they make it?
Maps can inform, misinform, and disinform, like any other information source. So it is important to pay attention to the map’s context as well as the author.
Viral maps are maps that spread quickly and widely, often via social media. Viral maps cannot always be trusted, even when they come from a reputable source. Maps that are trustworthy in one context may not be in another.
An example from Australian news media in February shows this. Several media outlets showed a map that was tweeted by UK researchers. The tweet announced the publication of their new paper about COVID-19.
The media reported the map showed locations to which COVID-19 had spread from Wuhan, China, the origin of the outbreak. It actually depicted airline flight routes, and was used in the tweet to illustrate how globally linked the world is. The map was from a 2012 study, not the 2020 study.
Many readers may have trusted that reporting because their justifiable anxiety about COVID-19 was reinforced by the map’s design choices. The mass of overlapping red symbols creates a powerful and alarming impression.
While the lines in the map indicate potential routes for virus spread, the map doesn’t provide evidence that the virus actually spread along all of these routes. The researchers didn’t claim that it did. But without understanding why the map was made and what it showed, several media outlets reported it inaccurately.
Maps on social media are especially likely to be missing important context and explanation. The airline route map was re-shared many times, often without any source information, making it hard to check its trustworthiness.
Limiting the damage done by COVID-19 is a very substantial challenge. Maps can help ordinary citizens to work together with governments to achieve that outcome. But they need to be made and read with care. Ask yourself what is being mapped, how it’s being mapped, who made the map and why they made it.
Digital platforms are now taking more steps to tackle misinformation about COVID-19 on their services. In a joint statement, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube have pledged to work together to combat misinformation.
Twitter has also announced a suite of changes to its rules, including updates to how it defines harm so as to address content that goes against authoritative public health information, and an increase in its use of machine learning and automation technologies to detect and remove potentially abusive and manipulative content.
Previous attempts unsuccessful
Unfortunately, Twitter has been unsuccessful in its recent attempts to tackle misinformation (or, more accurately, disinformation – incorrect information posted deliberately with an intent to obfuscate).
The platform has begun to label doctored videos and photos as “manipulated media”. The crucial first test of this initiative was a widely circulated altered video of Democratic presidential candidate Joe Biden, in which part of a sentence was edited out to make it sound as if he was forecasting President Donald Trump’s re-election.
It took Twitter 18 hours to label the video, by which time it had already received 5 million views and 21,000 retweets.
The label appeared below the video (rather than in a more prominent place), and was only visible to the roughly 757,000 accounts that followed the video’s original poster, White House social media director Dan Scavino. Users who saw the content via retweets from the White House (21 million followers) or President Donald Trump (76 million followers) did not see the label.
Labelling misinformation doesn’t work
There are four key reasons why Twitter’s (and other platforms’) attempts to label misinformation were ineffective.
First, social media platforms tend to use automated algorithms for these tasks, because they scale well. But labelling manipulated tweets requires human labour; algorithms cannot decipher complex human interactions. Will social media platforms invest in human labour to solve this issue? The odds are long.
Second, tweets can be shared millions of times before being labelled. Even if removed, they can easily be edited and then reposted to avoid algorithmic detection.
Third, and more fundamentally, labels may even be counterproductive, serving only to pique the audience’s interest. In this way, labels may actually amplify misinformation rather than curtail it.
Finally, the creators of deceptive content can deny their content was an attempt to obfuscate, and claim unfair censorship, knowing that they will find a sympathetic audience within the hyper-partisan arena of social media.
So how can we beat misinformation?
The situation might seem impossible, but there are some practical strategies that the media, social media platforms, and the public can use.
Second, if misinformation has reached the point at which it requires debunking, be sure to stress the facts rather than simply fanning the flames. Refer to experts and trusted sources, and use the “truth sandwich”, in which you state the truth, and then the misinformation, and finally restate the truth again.
Finally, all of us, as social media users, have a crucial role to play in combating misinformation. Before sharing something, think carefully about where it came from. Verify the source and its evidence, double-check with other independent sources, and report suspicious content to the platform directly. Now, more than ever, we need information we can trust.