Why Clive Palmer’s lockdown ads can be rejected by newspapers on ethical grounds



Denis Muller, The University of Melbourne

Clive Palmer’s United Australia Party advertisements, which implicitly object to COVID-19 lockdowns, demonstrate one more way in which the freedoms essential to a democracy can be abused to the detriment of the public interest.

Democracies protect freedom of speech, especially political speech, because without it democracy cannot work. When speech is harmful, however, laws and ethical conventions exist to curb it.

The laws regulating political advertising are minimal.

Section 329 of the Commonwealth Electoral Act is confined to the issue of whether a publication is likely to mislead or deceive an elector in relation to the casting of a vote. It has nothing to say about truth in political advertising for the good reason that defining truth in that context would be highly subjective and therefore oppressive.

Sections 52 and 53 of the Trade Practices Act make it an offence for corporations to engage in misleading or deceptive conduct, or to make false or misleading representations. The act has nothing to say about political advertising.

Ad Standards, the industry self-regulator, has a code of ethics that enjoins advertisers not to engage in misleading or deceptive conduct. It is a general rule that applies to all advertising, political or not.

The Palmer ads do not violate any of these provisions.






So where does that leave media organisations that receive an approach from the likes of Palmer to publish advertisements the terms of which are not false, misleading or deceptive, but which are clearly designed to undermine public support for public health measures such as lockdowns?

It leaves them having to decide whether to exercise an ethical prerogative.

Short of a legal requirement to do so – say, in settlement of a lawsuit – no media organisation is obliged to publish an advertisement. In almost all cases it is an ethical decision.

Naturally, freedom of speech imposes a heavy ethical burden to publish, but it is not the only consideration. John Stuart Mill’s harm principle becomes relevant. That principle says the prevention of harm to others is a legitimate constraint on individual freedom.

Undermining public support for public health measures is obviously harmful and against the public interest. Media organisations are entitled to make decisions on ethical bases like this. An example from relatively ancient history will illustrate the point.

In the late 1970s, Four Corners ran a program alleging that the Utah Development Corporation’s mining activities in Queensland were causing environmental damage. A few days after the program was broadcast, The Sydney Morning Herald received a full-page advertisement from Utah not only repudiating what Four Corners had said but attacking the professional integrity of the journalists who made the program.

I was chief of staff of the Herald that day and the advertisement was referred to me, partly because it contained the seeds of what might have been a news story and partly because there were concerns it might be defamatory.

I referred it to the executive assistant to the editor, David Bowman, who refused to publish it.

He objected to it not only on legal grounds but on ethical grounds, because it impugned the integrity of the journalists in circumstances where they would have no opportunity to respond. In his view, this was unfair.

A short while later, the advertising people came back saying Utah had offered to indemnify the Herald against any legal damages or costs arising from publication of the advertisement.

Bowman held to his ethical objection and was supported by the general manager, R. P. Falkingham, who said: “You don’t publish something just because a man with a lot of money stands behind you.”

The advertisement did not run, not because of the legal risks but because it would have breached the ethical value of fairness.

Palmer’s ads – which say lockdowns are bad for mental health, bad for jobs and bad for the economy – contain truisms. There is nothing false or misleading about them. But they clearly seek to exploit public resentment about lockdowns for political gain.

The clear intention is to stir up opposition and make the public health orders harder to enforce.






We live in an age where there are not only high levels of public anxiety, but also a great deal of confusion about who to believe on matters such as climate change and the pandemic. It is against the public interest to add gratuitously to that confusion, and harmful to the public welfare to undermine health orders.

These are grounds for rejecting his advertisements.

Nine Entertainment, which publishes The Age, The Sydney Morning Herald and The Australian Financial Review, has rejected Palmer ads that contain misinformation about the pandemic, including about vaccines. Clearly such ads violate the rules against misleading and deceptive content.

But the ads opposing lockdowns on economic or health grounds were initially accepted by Nine, and are still running in News Corporation papers.

The question now is whether media organisations are willing to make decisions based on ethical considerations that are wider than the narrow standard of deception.

Denis Muller, Senior Research Fellow, Centre for Advancing Journalism, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

View from The Hill: Barnaby Joyce repudiates Christensen’s COVID misinformation


Michelle Grattan, University of Canberra

Nationals leader Barnaby Joyce has dissociated himself from the views of his maverick backbencher George Christensen, who on Tuesday flatly rejected measures to contain COVID and played down the seriousness of the disease.

“I don’t agree with him,” Joyce said. “Just because someone has a view, it doesn’t mean it’s my view.” Joyce is personally close to Christensen.

Joyce drew on the experience of his father, who he said had been very involved in the eradication of brucellosis and bovine tuberculosis in northern NSW.

This had been done by large scale vaccination, quarantine, prosecution of people who did not comply with measures, and explanation, Joyce told The Conversation.

“I’m not going to step away from growing up having to deal with those things at an agricultural level. This is how you deal with diseases,” he said.

In a speech delivered just before question time, Christensen asked rhetorically, “How many more freedoms will we lose due to fear of a virus, which is a survivability rate of 997 out of a 1000?”

He said masks didn’t work and lockdowns didn’t work.

“Domestic vaccine passports are a form of discrimination,” he said.

“Nobody should be restricted from everyday life because of their medical choices, especially when vaccinated people can still catch and spread COVID-19.”

“Our posturing politicians, many over there [on the Labor benches], the sensationalist media elite and the dictatorial medical bureaucrats need to recognise these facts and stop spreading fear.

“COVID-19 is going to be with us forever, just like the flu and just like the flu, we will have to live with it, not in constant fear of it. Some people will catch it. Some people will tragically die from it.

“That’s inevitable and we have to accept it. What we should never accept is a systematic removal of our freedoms based on a zero risk health advice from a bunch of unelected medical bureaucrats. Open society back up. Restore our freedoms. End this madness.”

During question time Anthony Albanese, in a neat tactical strike, moved a motion calling on all MPs to “refrain from making ill-informed comments at a time when the pandemic represents a serious threat to the health of Australians”.

The motion also condemned “the comments of the member for Dawson prior to Question Time designed to use our national parliament to spread misinformation and undermine the actions of Australians to defeat COVID”.

Albanese suggested Christensen was able to wag “the National party dog” because Joyce was “quite happy” to let him.

Morrison was in an awkward corner. The government’s usual instinct would be to move to shut Albanese down. But that would have meant effectively backing Christensen.

By the same token Morrison did not want to risk giving Christensen the big whack he deserved.

Christensen is a man who enjoys making threats, even if he doesn’t carry them out, and he is not running at the election, so he has nothing to lose. If he “walked” to the crossbench, the government would lose its one-seat majority. It has already lost its majority on the floor of the House – when Craig Kelly, another recalcitrant on matters COVID, defected from the Liberals to the crossbench.

So the government let the Albanese motion proceed and in his reply to the opposition leader, the PM waved just the smallest of reproving feathers in Christensen’s direction.

After going through what had been done in the pandemic, Morrison said the government “will not support those statements, Mr Speaker, where there is misinformation that is out and about in the community, whether it’s posted, Mr Speaker, on Facebook, or it’s posted in social media, or it’s written in articles or made [in] statements. Whether in this chamber, Mr Speaker, or anywhere else.”

But he wasn’t going to “engage in a partisan debate on this. I am not, Mr Speaker, because what I know is Australians aren’t interested in the politics of COVID.”

Queensland Liberal Warren Entsch wasn’t reluctant to go in hard against Christensen. He told the ABC: “That is the sort of nonsense that I see in protests outside my office from time to time for those with conspiracy theories”. In the parliament “it was resoundingly rejected right across the whole political spectrum – when the motion was put up it was supported, there was not a single dissenter”.

Federal Communications Minister Paul Fletcher repeatedly refused to be drawn when pressed on the ABC on Christensen’s views. But NSW Environment Minister Matt Kean didn’t hold back, saying on the ABC that Christensen “is as qualified to talk about health policy as he is to perform brain surgery”.

Joyce wasn’t in the parliament – he went home at the end of last week and now, with COVID in his electorate of New England, he is confined there.

Michelle Grattan, Professorial Fellow, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How do we counter COVID misinformation? Challenge it directly with the facts




Adam Dunn, University of Sydney

The government is rolling out a new public information campaign this week to reassure the public about the safety of COVID-19 vaccines, which one expert has said “couldn’t be more crucial” to people actually getting the jabs when they are available.

Lack of access to vaccines is the most important barrier to widespread immunisation, so this campaign should go a long way toward getting the right people vaccinated at the right time.

But it also comes as government ministers — and even the prime minister — have refused to address the COVID-19 misinformation coming from those within their own ranks.

Despite advice from the Therapeutic Goods Administration explaining that hydroxychloroquine is not an effective treatment for COVID-19, MP Craig Kelly has continued to promote the opposite on Facebook. A letter he wrote on the same topic, bearing the Commonwealth coat of arms, was also widely distributed.

He has also incorrectly advocated the use of the anti-parasitic drug ivermectin as a treatment for COVID-19, and encouraged people to protest against what he called “health bureaucrats in an ivory tower”.

Compared to health experts, politicians and celebrities tend to have access to larger and more diverse audiences, particularly on social media. But politicians and celebrities may not always have the appraisal skills they need to assess clinical evidence.

I spend much of my time examining how researchers introduce biases into the design and reporting of trials and systematic reviews. Kelly probably has less experience in critically appraising trial design and reporting. But if he and I were competing for attention among Australians, his opinions would certainly reach a much larger and varied segment of the population.

Does misinformation really cause harm?

According to a recent Quantum Market Research survey of 1,000 people commissioned by the Department of Health, four in five respondents said they were likely to get a COVID-19 vaccine when it’s made available.

Australia generally has high levels of vaccine confidence compared to other wealthy countries – 72% strongly agree that vaccines are safe and less than 2% strongly disagree.

But there does appear to be some hesitancy about the COVID-19 vaccine. In the Quantum survey, 27% of respondents overall, and 42% of women in their 30s, had concerns about vaccine safety. According to the report, this showed

a need to dispel some specific fears held by certain cohorts of the community in relation to potential adverse side effects.

For other types of COVID misinformation, a University of Sydney study found younger men agreed more strongly with misconceptions and myths – such as that hydroxychloroquine is an effective treatment, that 5G networks spread the virus, or that the virus was engineered in a lab.

Surveys showing how attitudes and beliefs vary by demographics are useful, but it is difficult to know how exposure to misinformation affects the decisions people make about their health in the real world.






Studies measuring what happens to people’s behaviours after misinformation reaches a mainstream audience are rare. One study from 2015 looked at the effect of an ABC Catalyst episode that misrepresented evidence about cholesterol-lowering drugs — it found fewer people filled their statin prescriptions after the show.

When it comes to COVID-19, researchers are only starting to understand the influence of misinformation on people’s behaviours.

After public discussion about using bleach to potentially treat COVID-19, for instance, the number of internet searches about injecting and drinking disinfectants increased. This was followed by a spike in the number of calls to poison control phone lines for disinfectant-related injuries.

As vaccine roll-outs accelerate around the world, concern is growing about vaccine hesitancy among certain groups.
Peter Dejong/AP

Does countering misinformation online work?

The aim of countering misinformation is not to change the opinions of the people posting it, but to reduce misperceptions among the often silent audience. Public health organisations promoting the benefits of vaccinations on social media consider this when they decide to engage with anti-vaccine posts.

A study published this month by two American researchers, Emily Vraga and Leticia Bode, tested the effect of posting an infographic correction in response to misinformation about the science of a false COVID-19 prevention method. They found a bot developed with the World Health Organization and Facebook was able to reduce misperceptions by posting factual responses to misinformation when it appeared.






A common concern about correcting misinformation in this way is that it might cause a backfire effect, leading people to become more entrenched in misinformed beliefs. But research shows the backfire effect appears to be much rarer than first thought.

Vraga and Bode found no evidence of a backfire effect in their study. Their results suggest that responding to COVID-19 misinformation with factual information is likely to do more good than harm.

So, what’s the best strategy?

Social media platforms can address COVID-19 misinformation by simply removing or labelling posts and deplatforming users who post it.

This is probably most effective in situations where the user posting the misinformation has a small audience. In these cases, responding to misinformation with facts in a more direct way may be a waste of time and could unintentionally amplify the post.

When misinformation is shared by people like Kelly who are in positions of power and influence, removing those posts is like cutting a head off a hydra. It doesn’t stop the spread of misinformation at the source and more of the same will likely fill the void left behind.






In these instances, governments and organisations should consider directly countering misinformation where it occurs. To do this effectively, they need to consider the size of the audience, respond to the misinformation and not the person, and present evidence in simple and engaging ways.

The government’s current campaign fills an important gap in providing simple and clear information about who should get vaccinated and how. It doesn’t directly address the misinformation problem, but I think this would be the wrong place for that kind of effort, anyway.

Instead, research suggests it might be better to directly challenge misinformation where it appears. Rather than demanding the deplatforming of the people who post misinformation, we might instead think of it as an opportunity to correct misperceptions in front of the audiences that really need it.

Adam Dunn, Associate professor, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

There’s no such thing as ‘alternative facts’. 5 ways to spot misinformation and stop sharing it online




Mark Pearson, Griffith University

The blame for the recent assault on the US Capitol and President Donald Trump’s broader dismantling of democratic institutions and norms can be laid at least partly on misinformation and conspiracy theories.

Those who spread misinformation, like Trump himself, are exploiting people’s lack of media literacy — it’s easy to spread lies to people who are prone to believe what they read online without questioning it.

We are living in a dangerous age, where the internet makes it possible to spread misinformation far and wide, and most people lack the basic fact-checking abilities to discern fact from fiction — or, worse, the desire to develop a healthy skepticism at all.






Journalists are trained in this sort of thing — that is, the responsible ones who are trying to counter misinformation with truth.

Here are five fundamental lessons from Journalism 101 that all citizens can learn to improve their media literacy and fact-checking skills:

1. Distinguishing verified facts from myths, rumours and opinions

Cold, hard facts are the building blocks for considered and reasonable opinions in politics, media and law.

And there are no such things as “alternative facts” — facts are facts. Just because a falsity has been repeated many times by important people and their affiliates does not make it true.

We cannot expect the average citizen to have the skills of an academic researcher, journalist or judge in determining the veracity of an asserted statement. However, we can teach people some basic strategies before they mistake mere assertions for actual facts.

Does a basic internet search show these assertions have been confirmed by usually reliable sources – such as non-partisan mainstream news organisations, government websites and expert academics?

Students are taught to look to the URL of more authoritative sites — such as .gov or .edu — as a good hint at the factual basis of an assertion.

Searches and hashtags in social media are much less reliable as verification tools because you could be fishing within the “bubble” (or “echo chamber”) of those who share common interests, fears and prejudices – and are more likely to be perpetuating myths and rumours.

2. Mixing up your media and social media diet

We need to break out of our own “echo chambers” and our tendencies to access only the news and views of those who agree with us, on the topics that interest us and where we feel most comfortable.

For example, over much of the past five years, I have deliberately switched between various conservative and liberal media outlets when something important has happened in the US.

By looking at the coverage of the left- and right-wing media, I can hope to find a common set of facts both sides agree on — beyond the partisan rhetoric and spin. And if only one side is reporting something, I know to question this assertion and not just take it at face value.

3. Being skeptical and assessing the factual premise of an opinion

Journalism students learn to approach the claims of their sources with a “healthy skepticism”. For instance, if you are interviewing someone and they make what seems to be a bold or questionable claim, it’s good practice to pause and ask what facts the claim is based on.

Students are taught in media law this is the key to the fair comment defence to a defamation action. This permits us to publish defamatory opinions on matters of public interest as long as they are reasonably based on provable facts put forth by the publication.

The ABC’s Media Watch used this defence successfully (at trial and on appeal) when it criticised a Sydney Sun-Herald journalist’s reporting that claimed toxic materials had been found near a children’s playground.

This assessment of the factual basis of an opinion is not reserved for defamation lawyers – it is an exercise we can all undertake as we decide whether someone’s opinion deserves our serious attention and republication.






4. Exploring the background and motives of media and sources

A key skill in media literacy is the ability to look behind the veil of those who want our attention — media outlets, social media influencers and bloggers — to investigate their allegiances, sponsorships and business models.

For instance, these are some key questions to ask:

  • who is behind that think tank whose views you are retweeting?

  • who owns the online newspaper you read and what other commercial interests do they hold?

  • is your media diet dominated by news produced from the same corporate entity?

  • why does someone need to be so loud or insulting in their commentary; is this indicative of their neglect of important facts that might counter their view?

  • what might an individual or company have to gain or lose by taking a position on an issue, and how might that influence their opinion?

Just because someone has an agenda does not mean their facts are wrong — but it is a good reason to be even more skeptical in your verification processes.






5. Reflecting and verifying before sharing

We live in an era of instant republication. We immediately retweet and share content we see on social media, often without even having read it thoroughly, let alone having fact-checked it.

Mindful reflection before pressing that sharing button would allow you to ask yourself, “Why am I even choosing to share this material?”

You could also help shore up democracy by engaging in the fact-checking processes mentioned above to avoid being part of the problem by spreading misinformation.

Mark Pearson, Professor of Journalism and Social Media, Griffith Centre for Social and Cultural Research, Griffith University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

To stay or cut away? As Trump makes baseless claims, TV networks are faced with a serious dilemma




Denis Muller, University of Melbourne

In the United States, democratic norms are breaking down.

The president, Donald Trump, baselessly claimed at a White House press conference on Friday morning, Australian time, that the presidential election has been stolen from him by fraudulent and corrupt electoral processes.

This confronted the television networks, whose job is to report the news, with an acute dilemma.

In an already volatile political atmosphere, do they go on reporting these lies, laced with an undertone of veiled incitement to violence? Or do they cut away on the grounds that by continuing to broadcast this stuff, they are helping to propagate lies and perhaps to oxygenate a threat to the civil peace?

Major networks tune out

Many of the major networks — MSNBC, NBC News, CNBC, CBS News and ABC News — decided to cut away. So did National Public Radio.

MSNBC presenter Brian Williams said of Trump’s speech:

It was not rooted in reality and at this point, where our country is, it’s dangerous.

CNBC presenter Shepard Smith said the network was not going to allow the speech to keep going because what Trump was saying was not true.

CNN and Rupert Murdoch’s Fox News broadcast Trump’s entire press conference but immediately afterwards challenged what he said. CNN’s fact-checker Daniel Dale said it had been the most “dishonest” speech Trump had ever given, with anchor Jake Tapper saying Trump’s statements were “pathetic” and “a feast of falsehoods”.

Fox’s host Martha MacCallum said the supposed evidence and proof of election misconduct would need to be produced.

Even Murdoch’s New York Post, which had endorsed Trump’s re-election, accused him of making “baseless” election fraud claims, quoting a Republican Congressman as saying they were “insane”.

The Washington Post carried two news stories on its front page, clearly calling out Trump’s lies: “Falsehood upon falsehood”; “A speech of historic dishonesty”.

A serious decision to silence the President

But what of the networks’ decision to cut away?

Silencing a public official in the course of his official duties is a very serious abrogation of the media’s duty in a democracy.

But so is allowing the airwaves to be used in such a way as to arouse fears for public confidence in the democratic process and — as MSNBC’s Williams argued — even public safety.

Donald Trump giving his White House press conference. Shawn Thew/EPA

On the run, many of the big networks prioritised public confidence in the democratic process, and public safety, over the reporting of the president’s words.

It is a rare circumstance in any democratic society that the media are placed in the position of having to shoulder such a heavy burden of responsibility.

It is most unlikely that once the present crisis is over, assuming Democrat candidate Joe Biden wins, the American media will find themselves in this position again.

Even so, a Rubicon has been crossed. A president of the United States, a publicly elected official, has been silenced by significant elements of the professional mass media in the course of his public duties.






This was done principally on the grounds he was lying to the people in circumstances where there was a foreseeable risk of serious harm to the body politic, and there was no practicable way to reduce the risk.

Is that a standard the media is prepared to set for the future? If so, it would be giving itself a power that goes well beyond anything the media has claimed for itself until now.

Journalists need to keep their nerve

In considering this, two questions arise.

What if all media outlets had adopted this course? No one except those at the White House press conference would have known the whole of what Trump said, seen the context and observed the demeanour with which he said it.

Would it have been enough to do as CNN and Fox did — report the speech and then repudiate it?






An answer to that would be: the lies were coming so thick and fast, and were so damaging to the public interest, that it would have been impossible to set the record straight in anything like real time.

Real-time fact-checking is a relatively new development, and a welcome one. But its feasibility should not be a criterion for deciding whether to publish breaking news, unless there is doubt about whether the breaking news is actually happening.

The networks that cut away doubtless acted in good faith to do right by the country. Trump’s speech was shocking and irresponsible.

Trump supporters protest in Detroit.
Trump supporters have taken to the streets since the polls closed on November 3.
Nicole Hester/AP

However, American democracy is in crisis. At this time, above all, the public needs the institution of the fourth estate to keep its nerve and a clear head.

A primary norm of journalism is to inform the public. That certainly means being fair and accurate. But if the news contains lies, the norm is to publish and then call out the lying and set the record straight as soon as possible.

The networks need to explain to their audiences their reasoning behind the decision to cut away, and the media as a whole need to realise that if the norms of journalism break down, that just adds to the tragic chaos into which their country has descended.

Denis Muller, Senior Research Fellow, Centre for Advancing Journalism, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Coronavirus misinformation is a global issue, but which myth you fall for likely depends on where you live


Jason Weismueller, University of Western Australia; Jacob Shapiro, Princeton University; Jan Oledan, Princeton University, and Paul Harrigan, University of Western Australia

In February, major social media platforms attended a meeting hosted by the World Health Organization to address coronavirus misinformation. The aim was to catalyse the fight against what the United Nations has called an “infodemic”.

Usually, misinformation is focused on specific regions and topics. But COVID-19 is different. For what seems like the first time, both misinformation and fact-checking behaviours are coordinated around a common set of narratives the world over.

In our research, we identified the key trends in both coronavirus misinformation and fact-checking efforts. Using Google’s Fact Check Explorer computing interface we tracked fact-check posts from January to July – with the first checks appearing as early as January 22.

Google’s Fact Check Explorer database is connected with a range of fact-checkers, most of which are part of the Poynter Institute’s International Fact-Checking Network.
Screenshot

A uniform rate of growth

Our research found the volume of fact-checks on coronavirus misinformation increased steadily in the early stages of the virus’s spread (January and February) and then increased sharply in March and April – when the virus started to spread globally.

Interestingly, we found the same pattern of gradual and then sudden increase even after dividing fact-checks into Spanish, Hindi, Indonesian and Portuguese.

Thus, misinformation and subsequent fact-checking efforts trended in a similar way right across the globe. This is a unique feature of COVID-19.

According to our analysis, there has been no equivalent global trend for other issues such as elections, terrorism, police activity or immigration.

Different nations, different misconceptions

On March 16, the Empirical Studies of Conflict Project, in collaboration with Microsoft Research, began cataloguing COVID-19 misinformation.

It did this by collating news articles with reporting by a wide range of local fact-checking networks and global groups such as Agence France-Presse and NewsGuard.

We analysed this data set to explore the evolution of specific COVID-19 narratives, with “narrative” referring to the type of story a piece of misinformation pushes.

For instance, one misinformation narrative concerns the “origin of the virus”. This includes the false claim the virus jumped to humans as a result of someone eating bat soup.




Read more:
The Conversation’s FactCheck granted accreditation by International Fact-Checking Network at Poynter


We found the most common narrative worldwide was related to “emergency responses”. These stories reported false information about government or political responses to fighting the virus’s outbreak.

This may be because, unlike narratives surrounding the “nature of the virus”, it is easy to speculate on (and hard to prove) whether people in power have good or ill intent.

Notably, this was also the most common narrative in the US, with an early example being a false rumour the New York Police Department would immediately lock down New York City.

What’s more, a major motivation for spreading misinformation on social media is politics. The US has a highly polarised political environment, which might help explain the trend towards political misinformation.

We also found China has more misinformation narratives than any other country. This may be because China is the world’s most populous country.

However, it’s worth noting the main fact-checking website used by the Empirical Studies of Conflict Project for misinformation coming out of China is run by the Chinese Communist Party.

This chart shows the proportion of total misinformation narratives on COVID-19 by the top ten countries between January and July, 2020.

When fighting misinformation, it is important to have as wide a range of independent and transparent fact-checkers as possible. This reduces the potential for bias.

Hydroxychloroquine and other (non) ‘cures’

Another set of misinformation narratives was focused on “false cures” or “false preventative measures”. This was among the most common themes in both China and Australia.

One example was a video that went viral on social media suggesting hydroxychloroquine is an effective coronavirus treatment. This is despite experts stating it is not a proven COVID-19 treatment, and can actually have harmful side effects.

Myths about the “nature of the virus” were also common. These referred to specific characteristics of the virus – such as that it can’t spread on surfaces. We know this isn’t true.




Read more:
We know how long coronavirus survives on surfaces. Here’s what it means for handling money, food and more


Narratives reflect world events

Our analysis found different narratives peaked at different stages of the virus’s spread.

Misinformation about the nature of the virus was prevalent during the outbreak’s early stages, probably spurred by an initial lack of scientific research regarding the nature of the virus.

In contrast, theories relating to emergency responses surfaced later and remain even now, as governments continue to implement measures to fight COVID-19’s spread.

A wide variety of fact-checkers

We also identified greater diversity in websites fact-checking COVID-19 misinformation, compared to those investigating other topics.

Since January, only 25% of 6,000 fact-check posts or articles were published by the top five fact-checking websites (ranked by number of posts). In comparison, 68% of 3,000 climate change fact-checks were published by the top five websites.




Read more:
5 ways to help stop the ‘infodemic,’ the increasing misinformation about coronavirus


It seems resources previously devoted to a wide range of topics are now homing in on coronavirus misinformation. Nonetheless, it’s impossible to know the total volume of this content online.

For now, the best defence is for governments and online platforms to increase awareness about false claims and build on the robust fact-checking infrastructures at our disposal.

Jason Weismueller, Doctoral Researcher, University of Western Australia; Jacob Shapiro, Professor of Politics and International Affairs, Princeton University; Jan Oledan, Research Specialist, Princeton University, and Paul Harrigan, Associate Professor of Marketing, University of Western Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Young men are more likely to believe COVID-19 myths. So how do we actually reach them?



Shutterstock

Carissa Bonner, University of Sydney; Brooke Nickel, University of Sydney, and Kristen Pickles, University of Sydney

If the media is anything to go by, you’d think people who believe coronavirus myths are white, middle-aged women called Karen.

But our new study shows a different picture. We found men and people aged 18-25 are more likely to believe COVID-19 myths. We also found an increase among people from a non-English speaking background.

While we’ve heard recently about the importance of public health messages reaching people whose first language isn’t English, we’ve heard less about reaching young men.




Read more:
We asked multicultural communities how best to communicate COVID-19 advice. Here’s what they told us


What did we find?

Sydney Health Literacy Lab has been running a national COVID-19 survey of more than 1,000 social media users each month since Australia’s first lockdown.

A few weeks in, our initial survey showed younger people and men were more likely to think the benefit of herd immunity was covered up, and the threat of COVID-19 was exaggerated.

People who agreed with such statements were less likely to want to receive a future COVID-19 vaccine.




Read more:
The ‘herd immunity’ route to fighting coronavirus is unethical and potentially dangerous


In June, after restrictions eased, we asked social media users about more specific myths. We found:

  • men and younger people were more likely to believe prevention myths, such as hot temperatures or UV light being able to kill the virus that causes COVID-19

  • people with lower education and more social disadvantage were more likely to believe causation myths, such as 5G being used to spread the virus

  • younger people were more likely to believe cure myths, such as vitamin C and hydroxychloroquine being effective treatments.

We need more targeted research with young Australians, and men in particular, about why some of them believe these myths and what might change their mind.




Read more:
No, 5G radiation doesn’t cause or spread the coronavirus. Saying it does is destructive


Although our research has yet to be formally peer-reviewed, it reflects what other researchers have found, both in Australia and internationally.

An Australian poll in May found similar patterns, in which men and younger people believed a range of myths more than other groups.

In the UK, younger people are more likely to hold conspiracy beliefs about COVID-19. American men are also more likely to agree with COVID-19 conspiracy theories than women.

Why is it important to reach this demographic?

We need to reach young people with health messaging for several reasons.

The Victorian and New South Wales premiers have appealed to young people to limit socialising.

But is this enough when young people are losing interest in COVID-19 news? How many 20-year-old men follow Daniel Andrews on Twitter, or watch Gladys Berejiklian on television?

How can we reach young people?

We need to involve young people in the design of COVID-19 messages to get the delivery right, if we are to convince them to socialise less and follow prevention advice. We need to include them rather than blame them.

We can do this by testing our communications on young people or running consumer focus groups before releasing them to the public. We can include young people on public health communications teams.

We can also borrow strategies from marketing. For example, we know how tobacco companies use social media to effectively target young people. Paying popular influencers on platforms such as TikTok to promote reliable information is one option.




Read more:
Most adults have never heard of TikTok. That’s by design


We can target specific communities to reach young men who might not access mainstream media, for instance, gamers who have many followers on YouTube.

We also know humour can be more effective than serious messages to counteract science myths.

Some great examples

There are social media campaigns happening right now to address COVID-19, which might reach more young men than traditional public health methods.

NSW Health has recently started a campaign #Itest4NSW encouraging young people to upload videos to social media in support of COVID-19 testing.

The United Nations is running the global Verified campaign involving an army of volunteers to help spread more reliable information on social media. This may be a way to reach private groups on WhatsApp and Facebook Messenger, where misinformation spreads under the radar.

Telstra is using Australian comedian Mark Humphries to address 5G myths in a satirical way (although this would probably have more credibility if it didn’t come from a vested interest).


Finally, tech companies like Facebook are partnering with health organisations to flag misleading content and prioritise more reliable information. But this is just a start to address the huge problem of misinformation in health.




Read more:
Why is it so hard to stop COVID-19 misinformation spreading on social media?


But we need more

We can’t expect young men to access reliable COVID-19 messages from people they don’t know, through media they don’t use. To reach them, we need to build new partnerships with the influencers they trust and the social media companies that control their information.

It’s time to change our approach to public health communication, to counteract misinformation and ensure all communities can access, understand and act on reliable COVID-19 prevention advice.

Carissa Bonner, Research Fellow, University of Sydney; Brooke Nickel, Postdoctoral research fellow, University of Sydney, and Kristen Pickles, Postdoctoral Research Fellow, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How misinformation about 5G is spreading within our government institutions – and who’s responsible



Aris Oikonomou/EPA

Michael Jensen, University of Canberra

“Fake news” is not just a problem of misleading or false claims on fringe websites, it is increasingly filtering into the mainstream and has the potential to be deeply destructive.

My recent analysis of more than 500 public submissions to a parliamentary committee on the launch of 5G in Australia shows just how pervasive misinformation campaigns have become at the highest levels of government. A significant number of the submissions peddled inaccurate claims about the health effects of 5G.

These falsehoods were prominent enough that the committee felt compelled to address the issue in its final report. The report noted:

community confidence in 5G has been shaken by extensive misinformation
preying on the fears of the public spread via the internet, and presented as facts, particularly through social media.

This is a remarkable situation for Australian public policy – it is not common for a parliamentary inquiry to have to rebut the dodgy scientific claims it receives in the form of public submissions.

While many Australians might dismiss these claims as fringe conspiracy theories, the reach of this misinformation matters. If enough people act on the basis of these claims, it can cause harm to the wider public.

In late May, for example, protests against 5G, vaccines and COVID-19 restrictions were held in Sydney, Melbourne and Brisbane. Some protesters claimed 5G was causing COVID-19 and the pandemic was a hoax – a “plandemic” – perpetuated to enslave and subjugate the people to the state.




Read more:
Coronavirus, ‘Plandemic’ and the seven traits of conspiratorial thinking


Misinformation can also lead to violence. Last year, the FBI for the first time identified conspiracy theory-driven extremists as a terrorism threat.

Conspiracy theories that 5G causes autism, cancer and COVID-19 have also led to widespread arson attacks in the UK and Canada, along with verbal and physical attacks on employees of telecommunication companies.

The source of conspiracy messaging

To better understand the nature and origins of the misinformation campaigns against 5G in Australia, I examined the 530 submissions posted online to the parliament’s standing committee on communications and the arts.

The majority of submissions were from private citizens. A sizeable number, however, made claims about the health effects of 5G, parroting language from well-known conspiracy theory websites.

A perceived lack of “consent” (for example, here, here and here) about the planned 5G roll-out featured prominently in these submissions. One person argued she did not agree to allow 5G to be “delivered directly into” the home and “radiate” her family.




Read more:
No, 5G radiation doesn’t cause or spread the coronavirus. Saying it does is destructive


To connect sentiments like this to conspiracy groups, I looked at two well-known conspiracy sites that have been identified as promoting narratives consistent with Russian misinformation operations – the Centre for Research on Globalization (CRG) and Zero Hedge.

CRG is an organisation founded and directed by Michel Chossudovsky, a former professor at the University of Ottawa and opinion writer for Russia Today.

CRG has been flagged by NATO intelligence as part of wider efforts to undermine trust in “government and public institutions” in North America and Europe.

Zero Hedge, which is registered in Bulgaria, attracts millions of readers every month and ranks among the top 500 sites visited in the US. Most stories are geared toward an American audience.

Researchers at Rand have connected Zero Hedge with online influencers and other media sites known for advancing pro-Kremlin narratives, such as the claim that Ukraine, and not Russia, is to blame for the downing of Malaysia Airlines flight MH17.

Protesters targeting the coronavirus lockdown and 5G in Melbourne in May.
Scott Barbour/AAP

How it was used in parliamentary submissions

For my research, I scoured the top posts circulated by these groups on Facebook for false claims about the health threats posed by 5G. Some stories I found had headlines like “13 Reasons 5G Wireless Technology will be a Catastrophe for Humanity” and “Hundreds of Respected Scientists Sound Alarm about Health Effects as 5G Networks go Global”.

I then tracked the diffusion of these stories on Facebook and identified 10 public groups where they were posted. Two of the groups specifically targeted Australians – Australians for Safe Technology, a group with 48,000 members, and Australia Uncensored. Many others, such as the popular right-wing conspiracy group QAnon, also contained posts about the 5G debate in Australia.




Read more:
Conspiracy theories about 5G networks have skyrocketed since COVID-19


To determine the similarities in phrasing between the articles posted in these Facebook groups and the submissions to the Australian parliamentary committee, I used a text-similarity technique commonly used to detect plagiarism in student papers.

The analysis rates similarities in documents on a scale of 0 (entirely dissimilar) to 1 (exactly alike). There were 38 submissions with at least a 0.5 similarity to posts in the Facebook group 5G Network, Microwave Radiation Dangers and other Health Problems and 35 with a 0.5 similarity to the Australians for Safe Technology group.

This is significant because it means that, for these 73 submissions, at least half of the language was, word for word, identical to posts from extreme conspiracy groups on Facebook.
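The article doesn’t name the exact algorithm used, but a common way to score document similarity on a 0-to-1 scale is cosine similarity over word-count vectors. A minimal sketch, with invented example texts (not actual submissions):

```python
from collections import Counter
import math

def similarity(doc_a: str, doc_b: str) -> float:
    """Cosine similarity of word-count vectors: 0 = entirely dissimilar, 1 = exactly alike."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical texts standing in for a Facebook post and a submission.
post = "5g towers will radiate harmful microwave radiation into our homes"
submission = "5g towers will radiate harmful microwave radiation into every australian home"
if similarity(post, submission) >= 0.5:  # the 0.5 flagging threshold used in the study
    print("submission flagged as substantially similar")
```

Real plagiarism detectors typically weight terms (e.g. TF-IDF) and compare longer word sequences, but the 0-to-1 scale works the same way.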

The first 5G Optus tower in the suburb of Dickson in Canberra.
Mick Tsikas/AAP

The impact of misinformation on policy-making

The process for soliciting submissions to a parliamentary inquiry is an important part of our democracy. In theory, it provides ordinary citizens and organisations with a voice in forming policy.

My findings suggest Facebook conspiracy groups and potentially other conspiracy sites are attempting to co-opt this process to directly influence the way Australians think about 5G.

In the pre-internet age, misinformation campaigns often had limited reach and took a significant amount of time to spread. They typically required the production of falsified documents and a sympathetic media outlet. Mainstream news would usually ignore such stories and few people would ever read them.

Today, however, one only needs to create a false social media account and a meme. Misinformation can spread quickly if it is amplified through online trolls and bots.

It can also spread quickly on Facebook, with its algorithm designed to drive ordinary users to extremist groups and pages by exploiting their attraction to divisive content.

And once this manipulative content has been widely disseminated, countering it is like trying to put toothpaste back in the tube.

Misinformation has the potential to undermine faith in governments and institutions and make it more challenging for authorities to make demonstrable improvements in public life. This is why governments need to be more proactive in effectively communicating technical and scientific information, like details about 5G, to the public.

Just as nature abhors a vacuum, a public sphere without trusted voices quickly becomes filled with misinformation.

Michael Jensen, Senior Research Fellow, Institute for Governance and Policy Analysis, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why is it so hard to stop COVID-19 misinformation spreading on social media?




Tobias R. Keller, Queensland University of Technology and Rosalie Gillett, Queensland University of Technology

Even before the coronavirus arrived to turn life upside down and trigger a global infodemic, social media platforms were under growing pressure to curb the spread of misinformation.

Last year, Facebook cofounder and chief executive Mark Zuckerberg called for new rules to address “harmful content, election integrity, privacy and data portability”.

Now, amid a rapidly evolving pandemic, when more people than ever are using social media for news and information, it is more crucial than ever that people can trust this content.




Read more:
Social media companies are taking steps to tamp down coronavirus misinformation – but they can do more


Digital platforms are now taking more steps to tackle misinformation about COVID-19 on their services. In a joint statement, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube have pledged to work together to combat misinformation.

Facebook has traditionally taken a less proactive approach to countering misinformation. A commitment to protecting free expression has led the platform to allow misinformation in political advertising.

More recently, however, Facebook’s spam filter inadvertently marked legitimate news information about COVID-19 as spam. While Facebook has since fixed the mistake, this incident demonstrated the limitations of automated moderation tools.

In a step in the right direction, Facebook is allowing national ministries of health and reliable organisations to advertise accurate information on COVID-19 free of charge. Twitter, which prohibits political advertising, is allowing links to the Australian Department of Health and World Health Organization websites.

Twitter is directing users to trustworthy information.
Twitter.com

Twitter has also announced a suite of changes to its rules, including updates to how it defines harm so as to address content that goes against authoritative public health information, and an increase in its use of machine learning and automation technologies to detect and remove potentially abusive and manipulative content.

Previous attempts unsuccessful

Unfortunately, Twitter has been unsuccessful in its recent attempts to tackle misinformation (or, more accurately, disinformation – incorrect information posted deliberately with an intent to obfuscate).

The platform has begun to label doctored videos and photos as “manipulated media”. The crucial first test of this initiative was a widely circulated altered video of Democratic presidential candidate Joe Biden, in which part of a sentence was edited out to make it sound as if he was forecasting President Donald Trump’s re-election.

A screenshot of the tweet featuring the altered video of Joe Biden, with Twitter’s label.
Twitter

It took Twitter 18 hours to label the video, by which time it had already received 5 million views and 21,000 retweets.

The label appeared below the video (rather than in a more prominent place), and was only visible to the roughly 757,000 accounts that followed the video’s original poster, White House social media director Dan Scavino. Users who saw the content via retweets from the White House (21 million followers) or President Donald Trump (76 million followers) did not see the label.

Labelling misinformation doesn’t work

There are four key reasons why Twitter’s (and other platforms’) attempts to label misinformation were ineffective.

First, social media platforms tend to use automated algorithms for these tasks, because they scale well. But labelling manipulated tweets requires human labour; algorithms cannot decipher complex human interactions. Will social media platforms invest in human labour to solve this issue? The odds are long.

Second, tweets can be shared millions of times before being labelled. Even if removed, they can easily be edited and then reposted to avoid algorithmic detection.

Third, and more fundamentally, labels may even be counterproductive, serving only to pique the audience’s interest. As a result, labels may actually amplify misinformation rather than curtail it.

Finally, the creators of deceptive content can deny their content was an attempt to obfuscate, and claim unfair censorship, knowing that they will find a sympathetic audience within the hyper-partisan arena of social media.

So how can we beat misinformation?

The situation might seem impossible, but there are some practical strategies that the media, social media platforms, and the public can use.

First, unless the misinformation has already reached a wide audience, avoid drawing extra attention to it. Why give it more oxygen than it deserves?

Second, if misinformation has reached the point at which it requires debunking, be sure to stress the facts rather than simply fanning the flames. Refer to experts and trusted sources, and use the “truth sandwich”: state the truth, then address the misinformation, and finally restate the truth.

Third, social media platforms should be more willing to remove or restrict unreliable content. This might include disabling likes, shares and retweets for particular posts, and banning users who repeatedly misinform others.

For example, Twitter recently removed coronavirus misinformation posted by Rudy Giuliani and Charlie Kirk; the Infowars app was removed from Google’s app store; and, probably with the highest impact, Facebook, Twitter and Google’s YouTube removed coronavirus misinformation posted by Brazil’s president, Jair Bolsonaro.




Read more:
Meet ‘Sara’, ‘Sharon’ and ‘Mel’: why people spreading coronavirus anxiety on Twitter might actually be bots


Finally, all of us, as social media users, have a crucial role to play in combating misinformation. Before sharing something, think carefully about where it came from. Verify the source and its evidence, double-check with other independent sources, and report suspicious content to the platform directly. Now, more than ever, we need information we can trust.

Tobias R. Keller, Visiting Postdoc, Queensland University of Technology and Rosalie Gillett, Research Associate in Digital Platform Regulation, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

We’re in danger of drowning in a coronavirus ‘infodemic’. Here’s how we can cut through the noise



Paul Hanaoka/Unsplash

Connal Lee, University of South Australia

The novel coronavirus that has so far killed more than 1,100 people now has a name – COVID-19.

The World Health Organisation (WHO) didn’t want the name to refer to a place, animal or certain group of people and needed something pronounceable and related to the disease.

“Having a name matters to prevent the use of other names that can be inaccurate or stigmatising,” said WHO director-general Tedros Adhanom Ghebreyesus.

The organisation has been battling misinformation about the coronavirus, with some experts warning rumours are spreading more rapidly than the disease itself.




Read more:
Coronavirus fears: Should we take a deep breath?


The WHO describes the overabundance of information about the coronavirus as an “infodemic”. Some information is accurate, but much of it isn’t – and it can be difficult to tell what’s what.

What’s the problem?

Misinformation can spread unnecessary fear and panic. During the 2014 Ebola outbreak, rumours about the disease led to panic-buying, with many people purchasing Ebola virus protection kits online. These contained hazmat suits and face masks, which were unnecessary for protection against the disease.

As we’ve seen with the coronavirus, misinformation can prompt blame and stigmatisation of infected and affected groups. Since the outbreak began, Chinese Australians, who have no connection or exposure to the virus, have reported an increase in anti-Chinese language and abuse both online and on the streets.




Read more:
Coronavirus fears can trigger anti-Chinese prejudice. Here’s how schools can help


Misinformation can also undermine people’s willingness to follow legitimate public health advice. In extreme cases, people don’t acknowledge the disease exists, and fail to take proven precautionary measures.

In other cases, people may not seek help due to fears, misconceptions or a lack of trust in authorities.

The public may also grow bored or apathetic due to the sheer quantity of information out there.

Mode of transmission

The internet can be an ally in the fight against infectious diseases. Accurate messages about how the disease spreads and how to protect yourself and others can be distributed promptly and accessibly.

But inaccurate information spreads rapidly online. Users can find themselves inside echo chambers, embracing implausible conspiracy theories and ultimately distrusting those in charge of the emergency response.

The infodemic continues offline as information spreads via mobile phone, traditional media and in the work tearoom.

Previous outbreaks show authorities need to respond to misinformation quickly and effectively, while remaining aware that not everybody will believe the official line.

Responding to the infodemic

Last week, rumours emerged that the coronavirus was transmitted through infectious clouds in the air that people could inhale.

The WHO promptly responded to these claims, noting this was not the case. WHO’s Director of Global Infectious Hazard Preparedness, Sylvie Briand, explained:

Currently the virus is transmitted through droplets and you need a close contact to be infected.

This simple intervention demonstrates how a timely response can be effective. However, it may not convince everyone.




Read more:
Coronavirus fears: Should we take a deep breath?


Official messages need to be consistent to avoid confusion and information overload. However, coordination can be difficult, as we’ve seen this week.

Chinese health officials have offered potentially overly optimistic predictions, saying the outbreak will be over by April. Meanwhile, the WHO has given dire warnings, saying the virus poses a bigger threat than terrorism.

These inconsistencies can be understandable as governments try to placate fears while the WHO encourages us to prepare for the worst.

Health authorities should keep reiterating key messages, like the importance of regularly washing your hands. This is a simple and effective measure that helps people feel in control of their own protection. But it can be easily forgotten in a sea of information.

It’s worth reminding people to regularly wash their hands.
CDC/Unsplash

A challenge is that authorities may struggle to compete with the popularity of sensationalist stories and conspiracy theories about how diseases emerge, spread and what authorities are doing in response. Conspiracies may be more enjoyable than the official line, or may help some people preserve their existing, problematic beliefs.

Sometimes a prompt response won’t successfully cut through this noise.

Censorship isn’t the answer

Although censoring a harmful view could limit its spread, it could also make that view popular. Hiding negative news or over-reassuring people can leave them vulnerable and unprepared.

Censorship and media silence during the 1918 Spanish flu, which included not releasing numbers of affected and dead, undercut the seriousness of the pandemic.

When the truth emerges, people lose trust in public institutions.

Past outbreaks illustrate that building trust and legitimacy is vital to get people to adhere to disease prevention and control measures such as quarantines. Trying to mitigate fear through censorship is problematic.

Saving ourselves from drowning in a sea of (mis)information

The internet is useful for monitoring infectious disease outbreaks. Tracking keyword searches, for example, can detect emerging trends.

Observing online communication offers an opportunity to quickly respond to misunderstandings and to build a picture of which rumours gain the most traction.
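As a rough illustration of the idea (not any specific surveillance system), one simple way to spot a rumour gaining traction is to flag days where its daily search or mention count spikes well above a trailing average. The counts below are invented:

```python
def detect_spikes(daily_counts, window=7, factor=3.0):
    """Return indices of days whose count exceeds `factor` times the trailing `window`-day average."""
    spikes = []
    for i in range(window, len(daily_counts)):
        baseline = sum(daily_counts[i - window:i]) / window
        if baseline > 0 and daily_counts[i] > factor * baseline:
            spikes.append(i)
    return spikes

# A week of background chatter about a rumour, then a sudden surge on day 7.
counts = [12, 9, 11, 10, 8, 13, 10, 95]
print(detect_spikes(counts))  # prints [7]
```

Production systems use more robust baselines (seasonality, variance-adjusted thresholds), but the principle of comparing current volume against recent history is the same.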

Health authorities’ response to the infodemic should include a strategy for engaging with and even listening to those who spread or believe inaccurate stories, to gain a deeper understanding of how infodemics spread.

Connal Lee, Associate Lecturer, Philosophy, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.