Coronavirus misinformation is a global issue, but which myth you fall for likely depends on where you live


Jason Weismueller, University of Western Australia; Jacob Shapiro, Princeton University; Jan Oledan, Princeton University, and Paul Harrigan, University of Western Australia

In February, major social media platforms attended a meeting hosted by the World Health Organisation to address coronavirus misinformation. The aim was to catalyse the fight against what the United Nations has called an “infodemic”.

Usually, misinformation is focused on specific regions and topics. But COVID-19 is different. For what seems like the first time, both misinformation and fact-checking behaviours are coordinated around a common set of narratives the world over.

In our research, we identified the key trends in both coronavirus misinformation and fact-checking efforts. Using Google’s Fact Check Explorer interface, we tracked fact-check posts from January to July – with the first checks appearing as early as January 22.

Google’s Fact Check Explorer database is connected with a range of fact-checkers, most of which are part of the Poynter Institute’s International Fact-Checking Network.
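For readers who want to reproduce a similar tally, Google also exposes this data programmatically through its Fact Check Tools API. Below is a minimal Python sketch, offered as an illustration rather than the authors' actual pipeline; it assumes the publicly documented claims:search endpoint, and the parameter and field names should be checked against the current API reference.

# Illustrative sketch: counting COVID-19 fact-checks per month via the
# Google Fact Check Tools API (the programmatic counterpart to the
# Fact Check Explorer interface). Requires an API key; parameter and
# field names follow the public documentation and should be verified.
import collections
import requests

API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def monthly_counts(query="coronavirus", language="en"):
    counts = collections.Counter()
    page_token = None
    while True:
        params = {"query": query, "languageCode": language,
                  "pageSize": 100, "key": API_KEY}
        if page_token:
            params["pageToken"] = page_token
        data = requests.get(URL, params=params, timeout=30).json()
        for claim in data.get("claims", []):
            for review in claim.get("claimReview", []):
                date = review.get("reviewDate", "")  # e.g. "2020-03-14T00:00:00Z"
                if date:
                    counts[date[:7]] += 1  # bucket by year and month
        page_token = data.get("nextPageToken")
        if not page_token:
            break
    return counts

if __name__ == "__main__":
    for month, total in sorted(monthly_counts().items()):
        print(month, total)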

A uniform rate of growth

Our research found the volume of fact-checks on coronavirus misinformation increased steadily in the early stages of the virus’s spread (January and February) and then increased sharply in March and April – when the virus started to spread globally.

Interestingly, we found the same pattern of gradual and then sudden increase even when we divided the fact-checks by language, into Spanish, Hindi, Indonesian and Portuguese.

Thus, misinformation and subsequent fact-checking efforts trended in a similar way right across the globe. This is a unique feature of COVID-19.

According to our analysis, there has been no equivalent global trend for other issues such as elections, terrorism, police activity or immigration.

Different nations, different misconceptions

On March 16, the Empirical Studies of Conflict Project, in collaboration with Microsoft Research, began cataloguing COVID-19 misinformation.

It did this by collating news articles with reporting by a wide range of local fact-checking networks and global groups such as Agence France-Presse and NewsGuard.

We analysed this data set to explore the evolution of specific COVID-19 narratives, with “narrative” referring to the type of story a piece of misinformation pushes.

For instance, one misinformation narrative concerns the “origin of the virus”. This includes the false claim the virus jumped to humans as a result of someone eating bat soup.




Read more:
The Conversation’s FactCheck granted accreditation by International Fact-Checking Network at Poynter


We found the most common narrative worldwide was related to “emergency responses”. These stories reported false information about government or political responses to fighting the virus’s outbreak.

This may be because, unlike narratives surrounding the “nature of the virus”, it is easy to speculate on (and hard to prove) whether people in power have good or ill intent.

Notably, this was also the most common narrative in the US, with an early example being a false rumour the New York Police Department would immediately lock down New York City.

What’s more, a major motivation for spreading misinformation on social media is politics. The US is a polarised political environment, so this might help explain the trend towards political misinformation.

We also found China has more misinformation narratives than any other country. This may be because China is the world’s most populous country.

However, it’s worth noting the main fact-checking website used by the Empirical Studies of Conflict Project for misinformation coming out of China is run by the Chinese Communist Party.

This chart shows the proportion of total misinformation narratives on COVID-19 by the top ten countries between January and July, 2020.

When fighting misinformation, it is important to have as wide a range of independent and transparent fact-checkers as possible. This reduces the potential for bias.

Hydroxychloroquine and other (non) ‘cures’

Another set of misinformation narratives was focused on “false cures” or “false preventative measures”. This was among the most common themes in both China and Australia.

One example was a video that went viral on social media suggesting hydroxychloroquine is an effective coronavirus treatment. This is despite experts stating it is not a proven COVID-19 treatment, and can actually have harmful side effects.

Myths about the “nature of the virus” were also common. These referred to specific characteristics of the virus – such as that it can’t spread on surfaces. We know this isn’t true.




Read more:
We know how long coronavirus survives on surfaces. Here’s what it means for handling money, food and more


Narratives reflect world events

Our analysis found different narratives peaked at different stages of the virus’s spread.

Misinformation about the nature of the virus was prevalent during the outbreak’s early stages, probably spurred by an initial lack of scientific research regarding the nature of the virus.

In contrast, theories relating to emergency responses surfaced later and remain even now, as governments continue to implement measures to fight COVID-19’s spread.

A wide variety of fact-checkers

We also identified greater diversity in websites fact-checking COVID-19 misinformation, compared to those investigating other topics.

Since January, only 25% of 6,000 fact-check posts or articles were published by the top five fact-checking websites (ranked by number of posts). In comparison, 68% of 3,000 climate change fact-checks were published by the top five websites.




Read more:
5 ways to help stop the ‘infodemic,’ the increasing misinformation about coronavirus


It seems resources previously devoted to a wide range of topics are now homing in on coronavirus misinformation. Nonetheless, it’s impossible to know the total volume of this content online.

For now, the best defence is for governments and online platforms to increase awareness about false claims and build on the robust fact-checking infrastructures at our disposal.

Jason Weismueller, Doctoral Researcher, University of Western Australia; Jacob Shapiro, Professor of Politics and International Affairs, Princeton University; Jan Oledan, Research Specialist, Princeton University, and Paul Harrigan, Associate Professor of Marketing, University of Western Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Young men are more likely to believe COVID-19 myths. So how do we actually reach them?




Carissa Bonner, University of Sydney; Brooke Nickel, University of Sydney, and Kristen Pickles, University of Sydney

If the media is anything to go by, you’d think people who believe coronavirus myths are white, middle-aged women called Karen.

But our new study shows a different picture. We found men and people aged 18-25 are more likely to believe COVID-19 myths. We also found an increase among people from a non-English speaking background.

While we’ve heard recently about the importance of public health messages reaching people whose first language isn’t English, we’ve heard less about reaching young men.




Read more:
We asked multicultural communities how best to communicate COVID-19 advice. Here’s what they told us


What did we find?

The Sydney Health Literacy Lab has been running a national COVID-19 survey of more than 1,000 social media users each month since Australia’s first lockdown.

A few weeks in, our initial survey showed younger people and men were more likely to think the benefit of herd immunity was covered up, and the threat of COVID-19 was exaggerated.

People who agreed with such statements were less likely to want to receive a future COVID-19 vaccine.




Read more:
The ‘herd immunity’ route to fighting coronavirus is unethical and potentially dangerous


In June, after restrictions eased, we asked social media users about more specific myths. We found:

  • men and younger people were more likely to believe prevention myths, such as hot temperatures or UV light being able to kill the virus that causes COVID-19

  • people with lower education and more social disadvantage were more likely to believe causation myths, such as 5G being used to spread the virus

  • younger people were more likely to believe cure myths, such as vitamin C and hydroxychloroquine being effective treatments.

We need more targeted research with young Australians, and men in particular, about why some of them believe these myths and what might change their mind.




Read more:
No, 5G radiation doesn’t cause or spread the coronavirus. Saying it does is destructive


Although our research has yet to be formally peer-reviewed, it reflects what other researchers have found, both in Australia and internationally.

An Australian poll in May found similar patterns, in which men and younger people believed a range of myths more than other groups.

In the UK, younger people are more likely to hold conspiracy beliefs about COVID-19. American men are also more likely to agree with COVID-19 conspiracy theories than women.

Why is it important to reach this demographic?

We need to reach young Australians with health messaging for several reasons.

The Victorian and New South Wales premiers have appealed to young people to limit socialising.

But is this enough when young people are losing interest in COVID-19 news? How many 20-year-old men follow Daniel Andrews on Twitter, or watch Gladys Berejiklian on television?

How can we reach young people?

We need to involve young people in the design of COVID-19 messages to get the delivery right, if we are to convince them to socialise less and follow prevention advice. We need to include them rather than blame them.

We can do this by testing our communications on young people or running consumer focus groups before releasing them to the public. We can include young people on public health communications teams.

We can also borrow strategies from marketing. For example, we know how tobacco companies use social media to effectively target young people. Paying popular influencers on platforms such as TikTok to promote reliable information is one option.




Read more:
Most adults have never heard of TikTok. That’s by design


We can target specific communities to reach young men who might not access mainstream media, for instance, gamers who have many followers on YouTube.

We also know humour can be more effective than serious messages to counteract science myths.

Some great examples

There are social media campaigns happening right now to address COVID-19, which might reach more young men than traditional public health methods.

NSW Health has recently started a campaign #Itest4NSW encouraging young people to upload videos to social media in support of COVID-19 testing.

The United Nations is running the global Verified campaign involving an army of volunteers to help spread more reliable information on social media. This may be a way to reach private groups on WhatsApp and Facebook Messenger, where misinformation spreads under the radar.

Telstra is using Australian comedian Mark Humphries to address 5G myths in a satirical way (although this would probably have more credibility if it didn’t come from a vested interest).


Finally, tech companies like Facebook are partnering with health organisations to flag misleading content and prioritise more reliable information. But this is just a start to address the huge problem of misinformation in health.




Read more:
Why is it so hard to stop COVID-19 misinformation spreading on social media?


But we need more

We can’t expect young men to access reliable COVID-19 messages from people they don’t know, through media they don’t use. To reach them, we need to build new partnerships with the influencers they trust and the social media companies that control their information.

It’s time to change our approach to public health communication, to counteract misinformation and ensure all communities can access, understand and act on reliable COVID-19 prevention advice.

Carissa Bonner, Research Fellow, University of Sydney; Brooke Nickel, Postdoctoral research fellow, University of Sydney, and Kristen Pickles, Postdoctoral Research Fellow, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How misinformation about 5G is spreading within our government institutions – and who’s responsible




Michael Jensen, University of Canberra

“Fake news” is not just a problem of misleading or false claims on fringe websites, it is increasingly filtering into the mainstream and has the potential to be deeply destructive.

My recent analysis of more than 500 public submissions to a parliamentary committee on the launch of 5G in Australia shows just how pervasive misinformation campaigns have become at the highest levels of government. A significant number of the submissions peddled inaccurate claims about the health effects of 5G.

These falsehoods were prominent enough the committee felt compelled to address the issue in its final report. The report noted:

community confidence in 5G has been shaken by extensive misinformation preying on the fears of the public spread via the internet, and presented as facts, particularly through social media.

This is a remarkable situation for Australian public policy – it is not common for a parliamentary inquiry to have to rebut the dodgy scientific claims it receives in the form of public submissions.

While many Australians might dismiss these claims as fringe conspiracy theories, the reach of this misinformation matters. If enough people act on the basis of these claims, it can cause harm to the wider public.

In late May, for example, protests against 5G, vaccines and COVID-19 restrictions were held in Sydney, Melbourne and Brisbane. Some protesters claimed 5G was causing COVID-19 and the pandemic was a hoax – a “plandemic” – perpetuated to enslave and subjugate the people to the state.




Read more:
Coronavirus, ‘Plandemic’ and the seven traits of conspiratorial thinking


Misinformation can also lead to violence. Last year, the FBI for the first time identified conspiracy theory-driven extremists as a terrorism threat.

Conspiracy theories that 5G causes autism, cancer and COVID-19 have also led to widespread arson attacks in the UK and Canada, along with verbal and physical attacks on employees of telecommunication companies.

The source of conspiracy messaging

To better understand the nature and origins of the misinformation campaigns against 5G in Australia, I examined the 530 submissions posted online to the parliament’s standing committee on communications and the arts.

The majority of submissions were from private citizens. A sizeable number, however, made claims about the health effects of 5G, parroting language from well-known conspiracy theory websites.

A perceived lack of “consent” about the planned 5G roll-out featured prominently in these submissions. One person argued she did not agree to allow 5G to be “delivered directly into” the home and “radiate” her family.




Read more:
No, 5G radiation doesn’t cause or spread the coronavirus. Saying it does is destructive


To connect sentiments like this to conspiracy groups, I looked at two well-known conspiracy sites that have been identified as promoting narratives consistent with Russian misinformation operations – the Centre for Research on Globalization (CRG) and Zero Hedge.

CRG is an organisation founded and directed by Michel Chossudovsky, a former professor at the University of Ottawa and opinion writer for Russia Today.

CRG has been flagged by NATO intelligence as part of wider efforts to undermine trust in “government and public institutions” in North America and Europe.

Zero Hedge, which is registered in Bulgaria, attracts millions of readers every month and ranks among the top 500 sites visited in the US. Most stories are geared toward an American audience.

Researchers at Rand have connected Zero Hedge with online influencers and other media sites known for advancing pro-Kremlin narratives, such as the claim that Ukraine, and not Russia, is to blame for the downing of Malaysia Airlines flight MH17.

Protesters targeting the coronavirus lockdown and 5G in Melbourne in May.
Scott Barbour/AAP

How it was used in parliamentary submissions

For my research, I scoured the top posts circulated by these groups on Facebook for false claims about the health threats posed by 5G. Some stories I found had headlines like “13 Reasons 5G Wireless Technology will be a Catastrophe for Humanity” and “Hundreds of Respected Scientists Sound Alarm about Health Effects as 5G Networks go Global”.

I then tracked the diffusion of these stories on Facebook and identified 10 public groups where they were posted. Two of the groups specifically targeted Australians – Australians for Safe Technology, a group with 48,000 members, and Australia Uncensored. Many others, such as the popular right-wing conspiracy group QAnon, also contained posts about the 5G debate in Australia.




Read more:
Conspiracy theories about 5G networks have skyrocketed since COVID-19


To determine the similarities in phrasing between the articles posted on these Facebook groups and the submissions to the Australian parliamentary committee, I used a text-similarity technique of the kind commonly used to detect plagiarism in student papers.

The analysis rates similarities between documents on a scale of 0 (entirely dissimilar) to 1 (exactly alike). There were 38 submissions with at least a 0.5 similarity to posts in the Facebook group 5G Network, Microwave Radiation Dangers and other Health Problems, and 35 with at least a 0.5 similarity to posts in the Australians for Safe Technology group.

This is significant because it means that, for these 73 submissions, at least half of the language was, word for word, exactly the same as posts from extreme conspiracy groups on Facebook.
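The article does not name the software behind this comparison. As an illustration only (not the author's actual method), the Python sketch below produces a comparable 0-to-1 score by vectorising documents with TF-IDF over word three-grams and taking cosine similarity, a common approach in plagiarism detection; the 0.5 threshold mirrors the one used above.

# Illustrative sketch: scoring text similarity between parliamentary
# submissions and Facebook posts on a 0-1 scale, as plagiarism detectors do.
# This uses TF-IDF over word 3-grams with cosine similarity; it is one
# common technique, not necessarily the one used in the research above.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similar_pairs(submissions, posts, threshold=0.5):
    """Return (submission_index, post_index, score) for pairs scoring >= threshold."""
    vectoriser = TfidfVectorizer(analyzer="word", ngram_range=(3, 3))
    matrix = vectoriser.fit_transform(submissions + posts)
    sub_vectors = matrix[:len(submissions)]
    post_vectors = matrix[len(submissions):]
    scores = cosine_similarity(sub_vectors, post_vectors)  # values in [0, 1]
    return [(i, j, round(float(scores[i, j]), 2))
            for i in range(scores.shape[0])
            for j in range(scores.shape[1])
            if scores[i, j] >= threshold]

if __name__ == "__main__":
    # Toy example with made-up text, for demonstration only.
    subs = ["5G wireless technology will be a catastrophe for humanity."]
    posts = ["Scientists warn 5G wireless technology will be a catastrophe for humanity."]
    print(similar_pairs(subs, posts))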

The first 5G Optus tower in the suburb of Dickson in Canberra.
Mick Tsikas/AAP

The impact of misinformation on policy-making

The process for soliciting submissions to a parliamentary inquiry is an important part of our democracy. In theory, it provides ordinary citizens and organisations with a voice in forming policy.

My findings suggest Facebook conspiracy groups and potentially other conspiracy sites are attempting to co-opt this process to directly influence the way Australians think about 5G.

In the pre-internet age, misinformation campaigns often had limited reach and took a significant amount of time to spread. They typically required the production of falsified documents and a sympathetic media outlet. Mainstream news would usually ignore such stories and few people would ever read them.

Today, however, one only needs to create a false social media account and a meme. Misinformation can spread quickly if it is amplified through online trolls and bots.

It can also spread quickly on Facebook, with its algorithm designed to drive ordinary users to extremist groups and pages by exploiting their attraction to divisive content.

And once this manipulative content has been widely disseminated, countering it is like trying to put toothpaste back in the tube.

Misinformation has the potential to undermine faith in governments and institutions and make it more challenging for authorities to make demonstrable improvements in public life. This is why governments need to be more proactive in effectively communicating technical and scientific information, like details about 5G, to the public.

Just as nature abhors a vacuum, a public sphere without trusted voices quickly becomes filled with misinformation.

Michael Jensen, Senior Research Fellow, Institute for Governance and Policy Analysis, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why is it so hard to stop COVID-19 misinformation spreading on social media?




Tobias R. Keller, Queensland University of Technology and Rosalie Gillett, Queensland University of Technology

Even before the coronavirus arrived to turn life upside down and trigger a global infodemic, social media platforms were under growing pressure to curb the spread of misinformation.

Last year, Facebook cofounder and chief executive Mark Zuckerberg called for new rules to address “harmful content, election integrity, privacy and data portability”.

Now, amid a rapidly evolving pandemic, when more people than ever are using social media for news and information, it is crucial that people can trust this content.




Read more:
Social media companies are taking steps to tamp down coronavirus misinformation – but they can do more


Digital platforms are now taking more steps to tackle misinformation about COVID-19 on their services. In a joint statement, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter, and YouTube have pledged to work together to combat misinformation.

Facebook has traditionally taken a less proactive approach to countering misinformation. A commitment to protecting free expression has led the platform to allow misinformation in political advertising.

More recently, however, Facebook’s spam filter inadvertently marked legitimate news information about COVID-19 as spam. While Facebook has since fixed the mistake, this incident demonstrated the limitations of automated moderation tools.

In a step in the right direction, Facebook is allowing national ministries of health and reliable organisations to advertise accurate information on COVID-19 free of charge. Twitter, which prohibits political advertising, is allowing links to the Australian Department of Health and World Health Organization websites.

Twitter is directing users to trustworthy information.
Twitter.com

Twitter has also announced a suite of changes to its rules, including updates to how it defines harm so as to address content that goes against authoritative public health information, and an increase in its use of machine learning and automation technologies to detect and remove potentially abusive and manipulative content.

Previous attempts unsuccessful

Unfortunately, Twitter has been unsuccessful in its recent attempts to tackle misinformation (or, more accurately, disinformation – incorrect information posted deliberately with an intent to obfuscate).

The platform has begun to label doctored videos and photos as “manipulated media”. The crucial first test of this initiative was a widely circulated altered video of Democratic presidential candidate Joe Biden, in which part of a sentence was edited out to make it sound as if he was forecasting President Donald Trump’s re-election.

A screenshot of the tweet featuring the altered video of Joe Biden, with Twitter’s label.
Twitter

It took Twitter 18 hours to label the video, by which time it had already received 5 million views and 21,000 retweets.

The label appeared below the video (rather than in a more prominent place), and was only visible to the roughly 757,000 accounts who followed the video’s original poster, White House social media director Dan Scavino. Users who saw the content via retweets from the White House (21 million followers) or President Donald Trump (76 million followers) did not see the label.

Labelling misinformation doesn’t work

There are four key reasons why Twitter’s (and other platforms’) attempts to label misinformation were ineffective.

First, social media platforms tend to use automated algorithms for these tasks, because they scale well. But labelling manipulated tweets requires human labour; algorithms cannot decipher complex human interactions. Will social media platforms invest in human labour to solve this issue? The odds are long.

Second, tweets can be shared millions of times before being labelled. Even if removed, they can easily be edited and then reposted to avoid algorithmic detection.

Third, and more fundamentally, labels may even be counterproductive, serving only to pique the audience’s interest. As a result, labels may actually amplify misinformation rather than curtail it.

Finally, the creators of deceptive content can deny their content was an attempt to obfuscate, and claim unfair censorship, knowing that they will find a sympathetic audience within the hyper-partisan arena of social media.

So how can we beat misinformation?

The situation might seem impossible, but there are some practical strategies that the media, social media platforms, and the public can use.

First, unless the misinformation has already reached a wide audience, avoid drawing extra attention to it. Why give it more oxygen than it deserves?

Second, if misinformation has reached the point at which it requires debunking, be sure to stress the facts rather than simply fanning the flames. Refer to experts and trusted sources, and use the “truth sandwich”, in which you state the truth, and then the misinformation, and finally restate the truth again.

Third, social media platforms should be more willing to remove or restrict unreliable content. This might include disabling likes, shares and retweets for particular posts, and banning users who repeatedly misinform others.

For example, Twitter recently removed coronavirus misinformation posted by Rudy Giuliani and Charlie Kirk; the Infowars app was removed from Google’s app store; and, probably with the highest impact, Facebook, Twitter and Google’s YouTube removed coronavirus misinformation posted by Brazil’s president, Jair Bolsonaro.




Read more:
Meet ‘Sara’, ‘Sharon’ and ‘Mel’: why people spreading coronavirus anxiety on Twitter might actually be bots


Finally, all of us, as social media users, have a crucial role to play in combating misinformation. Before sharing something, think carefully about where it came from. Verify the source and its evidence, double-check with other independent sources, and report suspicious content to the platform directly. Now, more than ever, we need information we can trust.

Tobias R. Keller, Visiting Postdoc, Queensland University of Technology and Rosalie Gillett, Research Associate in Digital Platform Regulation, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

We’re in danger of drowning in a coronavirus ‘infodemic’. Here’s how we can cut through the noise




Connal Lee, University of South Australia

The disease caused by the novel coronavirus, which has so far killed more than 1,100 people, now has a name – COVID-19.

The World Health Organisation (WHO) didn’t want the name to refer to a place, animal or certain group of people and needed something pronounceable and related to the disease.

“Having a name matters to prevent the use of other names that can be inaccurate or stigmatising,” said WHO director-general Tedros Adhanom Ghebreyesus.

The organisation has been battling misinformation about the coronavirus, with some experts warning rumours are spreading more rapidly than the disease itself.




Read more:
Coronavirus fears: Should we take a deep breath?


The WHO describes the overabundance of information about the coronavirus as an “infodemic”. Some information is accurate, but much of it isn’t – and it can be difficult to tell what’s what.

What’s the problem?

Misinformation can spread unnecessary fear and panic. During the 2014 Ebola outbreak, rumours about the disease led to panic-buying, with many people purchasing Ebola virus protection kits online. These contained hazmat suits and face masks, which were unnecessary for protection against the disease.

As we’ve seen with the coronavirus, misinformation can prompt blame and stigmatisation of infected and affected groups. Since the outbreak began, Chinese Australians, who have no connection or exposure to the virus, have reported an increase in anti-Chinese language and abuse both online and on the streets.




Read more:
Coronavirus fears can trigger anti-Chinese prejudice. Here’s how schools can help


Misinformation can also undermine people’s willingness to follow legitimate public health advice. In extreme cases, people don’t acknowledge the disease exists, and fail to take proven precautionary measures.

In other cases, people may not seek help due to fears, misconceptions or a lack of trust in authorities.

The public may also grow bored or apathetic due to the sheer quantity of information out there.

Mode of transmission

The internet can be an ally in the fight against infectious diseases. Accurate messages about how the disease spreads and how to protect yourself and others can be distributed promptly and accessibly.

But inaccurate information spreads rapidly online. Users can find themselves inside echo chambers, embracing implausible conspiracy theories and ultimately distrusting those in charge of the emergency response.

The infodemic continues offline as information spreads via mobile phone, traditional media and in the work tearoom.

Previous outbreaks show authorities need to respond to misinformation quickly and effectively, while remaining aware that not everybody will believe the official line.

Responding to the infodemic

Last week, rumours emerged that the coronavirus was transmitted through infectious clouds in the air that people could inhale.

The WHO promptly responded to these claims, noting this was not the case. WHO’s Director of Global Infectious Hazard Preparedness, Sylvie Briand, explained:

Currently the virus is transmitted through droplets and you need a close contact to be infected.

This simple intervention demonstrates how a timely response can be effective. However, it may not convince everyone.






Official messages need to be consistent to avoid confusion and information overload. However, coordination can be difficult, as we’ve seen this week.

Potentially overly optimistic predictions have come from Chinese health officials saying the outbreak will be over by April. Meanwhile, the WHO has given dire warnings, saying the virus poses a bigger threat than terrorism.

These inconsistencies can be understandable as governments try to placate fears while the WHO encourages us to prepare for the worst.

Health authorities should keep reiterating key messages, like the importance of regularly washing your hands. This is a simple and effective measure that helps people feel in control of their own protection. But it can be easily forgotten in a sea of information.

It’s worth reminding people to regularly wash their hands.
CDC/Unsplash

A challenge is that authorities may struggle to compete with the popularity of sensationalist stories and conspiracy theories about how diseases emerge, spread and what authorities are doing in response. Conspiracies may be more enjoyable than the official line, or may help some people preserve their existing, problematic beliefs.

Sometimes a prompt response won’t successfully cut through this noise.

Censorship isn’t the answer

Although censoring a harmful view could limit its spread, it could also make that view popular. Hiding negative news or over-reassuring people can leave them vulnerable and unprepared.

Censorship and media silence during the 1918 Spanish flu, which included not releasing the numbers of people affected and dead, undercut the seriousness of the pandemic.

When the truth emerges, people lose trust in public institutions.

Past outbreaks illustrate that building trust and legitimacy is vital to get people to adhere to disease prevention and control measures such as quarantines. Trying to mitigate fear through censorship is problematic.

Saving ourselves from drowning in a sea of (mis)information

The internet is useful for monitoring infectious disease outbreaks. Tracking keyword searches, for example, can detect emerging trends.
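As a toy illustration of such trend detection (a simple heuristic, not any health authority's actual system), the Python sketch below flags days on which a keyword's search count jumps well above its recent average.

# Illustrative sketch: flagging emerging trends in daily keyword-search counts.
# Heuristic: flag a day as a spike if its count exceeds three times the
# trailing seven-day average. The thresholds here are arbitrary examples.
from statistics import mean

def detect_spikes(daily_counts, window=7, factor=3.0):
    """daily_counts: list of (date, count) pairs in date order. Returns spike dates."""
    spikes = []
    for i in range(window, len(daily_counts)):
        baseline = mean(count for _, count in daily_counts[i - window:i])
        date, count = daily_counts[i]
        if baseline > 0 and count > factor * baseline:
            spikes.append(date)
    return spikes

if __name__ == "__main__":
    # Toy data: a sudden jump in searches for a rumour-related keyword.
    series = list(zip(
        ("2020-02-%02d" % day for day in range(1, 15)),
        [5, 4, 6, 5, 5, 7, 6, 5, 6, 5, 40, 55, 60, 30]))
    print(detect_spikes(series))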

Observing online communication offers an opportunity to respond quickly to misunderstandings and to build a picture of which rumours gain the most traction.

Health authorities’ response to the infodemic should include a strategy for engaging with, and even listening to, those who spread or believe inaccurate stories, to gain a deeper understanding of how infodemics spread.

Connal Lee, Associate Lecturer, Philosophy, University of South Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Messianic Jews in Israel Seek Public Apology for Attack


Christians await court decision on assaults on services by ultra-orthodox Jews.

ISTANBUL, April 23 (CDN) — After a final court hearing in Israel last week, a church of Messianic Jews awaits a judge’s decision that could force an ultra-orthodox Jewish organization to publicly apologize to them for starting a riot and ransacking a baptismal service.

A ruling in favor of the Christian group would mark the first time an organization opposing Messianic Jews in Israel has had to apologize to its victims for religious persecution.

In 2006 Howard Bass, pastor of Yeshua’s Inheritance church, filed suit against Yehuda Deri, chief Sephardic rabbi in the city of Beer Sheva, and Yad L’Achim, an organization that fights against Messianic Jews, for allegedly inciting a riot at a December 2005 service that Bass was leading.

Bass has demanded either a public apology for the attack or 1.5 million shekels (US$401,040) from the rabbi and Yad L’Achim.

The case, Bass said, was ultimately about “defending the name of Yeshua [Jesus]” and making sure that Deri, the leadership of Yad L’Achim and those that support them know they have to obey the law and respect the right of people to worship.

“They are trying to get away from having any responsibility,” Bass said.

On Dec. 24, 2005, during a baptismal service in Beer Sheva, a group of about 200 men pushed their way into a small, covered structure being used to baptize two believers and tried to stop the service. Police were called to the scene but could not control the crowd.

Once inside the building, the assailants tossed patio chairs, damaged audiovisual equipment, threw a grill and other items into a baptismal pool, and then pushed Bass into the pool and broke his glasses.

“Their actions were violent actions without regard [for injury],” Bass said.

In the days before the riot, Yad L’Achim had issued notices to people about a “mass baptism” scheduled to take place at the facility in the sprawling city of 531,000 people 51 miles (83 kilometers) southwest of Jerusalem. In the days after the riot, Deri bragged about the incident on a radio talk show, including a boast that Bass had been “baptized” at the gathering.

The 2005 incident wasn’t the first time the church had to deal with a riotous attack after Yad L’Achim disseminated false information about their activities. On Nov. 28, 1998, a crowd of roughly 1,000 protestors broke up a Yeshua’s Inheritance service after the anti-Christian group spread a rumor that three busloads of kidnapped Jewish minors were being brought in for baptism. The assailants threw rocks, spit on parishioners and attempted to seize some of their children, Bass said.

In response to the 1998 attack and to what Bass described as a public, cavalier attitude about the 2005 attack, Bass and others in the Messianic community agreed that he needed to take legal action.

“What is happening here has happened to Jews throughout the centuries,” Bass said about persecution of Messianic Jews in Israel, adding that many in movements opposed to Messianic Jews in Israel are “arrogant.” He compared their attitudes to the attitudes that those in Hamas, a Palestinian group dedicated to the destruction of the State of Israel, have toward Israelis in general.

“They say, ‘Recognize us, but we will never recognize you,’” Bass said.

Long Battle

Bass has fought against the leadership of Yad L’Achim and Deri for four years through his attorneys, Marvin Kramer and Kevork Nalbandian. But throughout the process, Kramer said, the two defendants have refused to offer a genuine apology for the misinformation that led to the 2005 riot or for the riot itself.

Kramer said Bass’s legal team would offer language for an acceptable public apology, and attorneys for the defendants in turn would offer language that amounted to no real apology at all.

“We made several attempts to make a compromise, but we couldn’t do it,” Kramer said.  “What we were really looking for was a public apology, and they weren’t ready to give a public apology. If we would have gotten the public apology, we would have dropped the lawsuit at any point.”

Despite several attempts to reach Yad L’Achim officials at both their U.S. and Israeli offices, no one would comment.

The hearing on April 15 was the final chance the parties had to come to an agreement; the judge has 30 days to give a ruling. His decision will be issued by mail.

Kramer declined to speculate on what the outcome of the case will be, but he said he had “proved what we needed to prove to be successful.”

Belief in Israel

Bass said he is a strong supporter of Israel but is critical of the way Messianic Jews are treated in the country.

“Israel opposes the gospel, and these events show this to be true,” he said. Referring to Israel, Bass paraphrased Stephen, one of Christianity’s early martyrs, “‘You always resist the Spirit of God.’ What Stephen said was true.”

Kramer said that the lawsuit is not against the State of Israel or the Jewish people, but rather for freedom of religion.

“It has to do with a violation of rights of individuals to worship in accordance with the basic tenets of their faith and to practice their faith in accordance with their beliefs in accordance with law,” he said.

Terrorist Organization?

Bass’ lawsuit is just one of many legal troubles Yad L’Achim is facing. In February, the Jerusalem Institute of Justice (JIJ), a civil rights advocacy group, filed a petition asking Attorney General Yehuda Weinstein to declare Yad L’Achim a terrorist organization and order that it be dismantled.

In the 24-page document Caleb Myers, an attorney for JIJ, outlined numerous incidents in which Yad L’Achim or those linked with it had “incited hatred, racism, violence and terror.” The document cited instances of persecution against Christians, as well as kidnappings of Jewish women from their Arab partners.

“Israel is a ‘Jewish and democratic’ state, while the actions of Yad L’Achim are not consistent with either the noble values of Judaism or the values of democracy,” the petition read. “Not to mention the fact that it is a country that arose on the ashes of a people that was persecuted for its religion, and has resolved since its establishment to bear the standard of full equality, without discrimination on the basis of gender, race, religion or nationality.”

According to the document, Yad L’Achim went after people it viewed as enemies of ultra-orthodox Judaism. The group particularly targeted Messianic Jews and other Christians.

“Yad L’Achim refers to ‘missionary activity’ as if it was the worst of criminal offenses and often arouses fear of this activity,” the document read. “It should be noted that in the State of Israel there is no prohibition against ‘missionary activity’ as the dissemination of religion and/or faith among members of other religions/faiths, unless such activity solicits religious conversion, as stated in various sections of the Penal Code, which bans the solicitation of religious conversion among minors, or among adults by offering bribes. Furthermore, the organization often presents anyone belonging to the Christian religion, in all its forms, as a ‘missionary,’ even if he does not work to spread his religion.”

Particularly damning in the document was reported testimony gleaned from Jack Teitel. Teitel, accused of planting a bomb on March 20, 2008 that almost killed the teenage son of a Messianic Jewish pastor, told authorities that he worked with Yad L’Achim.

“He was asked to talk about his activity in Yad L’Achim and related that for some five years he was active in the organization, and on average he helped to rescue about five women each year,” the document read, using the Yad L’Achim term “rescue” to refer to kidnapping.

The 2008 bombing severely injured Ami Ortiz, then 15, but after 20 months he had largely recovered.

Teitel, who said Ortiz family members were “missionaries trying to capture weak Jews,” has been indicted on two cases of pre-meditated murder, three cases of attempted murder, carrying a weapon, manufacturing a weapon, possession of illegal weapons and incitement to commit violence.

In interviews with the Israeli media, Yad L’Achim Chairman Rabbi Shalom Dov Lifshitz said his organization wasn’t connected with the attack on the Ortiz family or with Teitel.

Report from Compass Direct News

Violent Death of Girl in Pakistan Spurs Push for Justice


Rare protest by family of tortured child puts spotlight on abuse of Christian working poor.

LAHORE, Pakistan, January 28 (CDN) — A daring protest and a high-profile funeral here on Monday (Jan. 25) for a 12-year-old Christian girl who died from torture and malnourishment has cast a rare spotlight on abuse of the Christian poor in Pakistan.

In an uncommon challenge in the predominantly Muslim nation, the Christian parents of Shazia Bashir Masih protested police unresponsiveness to the alleged violence against their daughter by Muslim attorney Chaudhary Muhammad Naeem and his family and his attempt to buy their silence after her death. The house servant died on Friday (Jan. 22) after working eight months in Naeem’s house.

An initial medical report indicated she died gradually from blows from a blunt instrument, wounds from a sharp-edged weapon, misuse of medicines and malnourishment. Key media highlighted the case on Pakistan’s airwaves, and minority rights groups along with high-ranking Christian politicians have swooped in to help.

Initially police were unresponsive to the family’s efforts to file charges against Muslim attorney Naeem, and on Saturday (Jan. 23) the family staged a protest in front of the Punjab Assembly. The power of Naeem, a former president of the Lahore Bar Association, was such that officers at Litton Road police station refused to listen to Shazia’s relatives when they tried to file a complaint to retrieve her three months ago, telling the girl’s relatives, “a case against a lawyer cannot be registered,” her uncle Rafiq Masih told Compass.

Her mother, Nasreen Bibi, told Compass Naeem came to their home on the day Shazia died and offered 30,000 rupees (US$350) to keep the death secret and to pay for burial expenses.

“I refused to accept their offer, and they went away hurling death threats,” she said.

Bibi, a widow who subsequently married a 70-year-old blind man, told Compass that hunger and poverty had forced her to send her daughter to work at Naeem’s house for 1,000 rupees per month (US$12) – the family’s only source of income. Two older daughters are married, and she still cares for a 10-year-old daughter and 8-year-old son living at home.

Rafiq Masih said Naeem illegally kept Shazia at his house, forced her to work long hours and summarily refused family requests to see her. Three months ago, Masih said, Naeem allowed him and Shazia’s mother to see her for five minutes, and the girl complained that Naeem and his son were raping her. Shazia also told them that Naeem, his wife and sister-in-law were beating her and threatening to harm her if she tried to escape.

Enraged, Naeem promptly asked him and Shazia’s mother to leave, Masih said.

“We tried to bring Shazia with us back home,” he said, “but Naeem flatly refused to let Shazia go, and he cruelly and inhumanely grabbed her hair and dragged her inside the house. He returned to threaten us with dire consequences if we tried to file a case against him for keeping Shazia at his home as a bonded laborer.”

Masih and Bibi then went to the Litton Road police station to try to get Naeem to release Shazia, and it was then that duty officers deliberately offered the misinformation that a case could not be made against a lawyer, they said.

A Muslim neighbor of Naeem, Shaukat Ali Agha, told Compass that Naeem tortured Shazia.

“Often that little girl’s cries for mercy could be heard from the residence of the lawyer during the dead of night,” Agha said. “And whenever Shazia requested some food, she got thrashed badly by his wife, son and sister-in-law. One day Shazia was viciously beaten when, forced by starvation, she could not resist picking up a small piece of sugar cane from the lawn of Naeem’s residence to chew.”

As Shazia’s condition deteriorated, Naeem released her to the family and they took her to Jinnah Hospital Lahore on Jan. 19. After fighting for her life there for three days, she succumbed to her injuries and critically malnourished condition, her mother said.

Doctors at the hospital told Compass they found 18 wounds on her body: 13 from a blunt instrument, and five from a “sharp-edged weapon.”

A high-ranking investigating official told Compass that Naeem had given contrary statements under questioning. The police official said that Naeem initially stated that Shazia had fallen down some stairs and died. The police official, who spoke on condition of anonymity, said Naeem quickly changed his statement, saying she had stolen food from the refrigerator and therefore was beaten. The official added that Naeem also said Shazia was insane, disobedient and stubborn, and “therefore she had gotten thrashed and died.”

Doctors at Mayo Hospital Morgue have taken blood and tissue samples from Shazia’s liver, stomach and kidneys and sent them to the Chief Chemical Examiner’s Forensic Lab in Islamabad to determine the official causes of death, officials said.

Family Beaten in Court

On Saturday (Jan. 23) Shazia’s family, along with many other Christians and Muslims, protested outside the Punjab Assembly for three hours, according to rights groups. Key television channels covered police inaction in the face of the violent death, and several high-profile politicians pledged their support, including Pakistani President Asif Ali Zardari. He promised to give the family 500,000 rupees (US$5,835) after Pakistani Minister of Minorities Affairs Shahbaz Bhatti announced a gift of the same amount to compensate the family.

Only after this public pressure did police file a First Information Report, and Naeem and six others, including family members, were arrested earlier this week. Chief Minister of Punjab Shahbaz Sharif reportedly visited the family, promising justice.

The Lahore High Court took up the case on Tuesday (Jan. 26) and ordered police to conclude investigations within 14 days, but none of the high-level action seemed to matter at a hearing that day at District and Sessions Court Lahore, at which Naeem and his accusers were present. As routinely happens in cases where Christians in Pakistan accuse Muslims of wrongdoing, Compass observed as Naeem’s lawyers chanted slogans against Shazia’s family, threatened them and beat them – including Bibi and her blind husband – driving them from the courtroom.

Compass witnessed the Muslim attorneys yelling chants against local media and Christianity, as well. Naeem was neither handcuffed nor escorted by Defense A-Division Police, though he has been charged with murder.

At Shazia’s funeral on Monday at Sacred Heart Cathedral Church, Bishop of Lahore Diocese the Rt. Rev. Alexander John Malik officiated as eminent Christian politicians, human rights activists, Christian clergymen and many others gathered to pay their respects amid heavy police contingents.

After the funeral, her body was taken to her home in the Sammanabad slum of Arriya Nagar, where a throng of neighbors and Christian mourners gathered, chanting for justice. Shazia’s coffin was then taken to Miani Sahib Christian Cemetery, where she was buried amid cries and tears.

Present at the burial ceremonies were Provincial Minister of Punjab for Minorities Affairs Kamran Michael, Federal Minister for Minorities Affairs Bhatti, Christian members of Punjab Parliament Tahir Naveed Chaudhary and Khalil Tahir Sindhu, Bishop Albert Javed, Bishop Samuel Azariah, National Director of the Center for Legal Aid Assistance and Settlement Joseph Francis and other Christian leaders.

In a joint statement issued that day in Lahore, Catholic Archbishop Lawrence John Saldanha and Peter Jacob, executive secretary of the National Council for Justice and Peace, said that Shazia’s death was not an isolated incident, but that violence against the more than 10 million child laborers in the country is commonplace.

Report from Compass Direct News 

TURKEY: CHRISTIAN BOOKSHOP IN ADANA VANDALIZED


Second attack within one week follows threats from Muslim nationalists.

ISTANBUL, February 17 (Compass Direct News) – Following threats from Muslim nationalists, a Turkish Bible Society bookshop in the southern city of Adana was vandalized for the second time in a week on Thursday (Feb. 12).

Security camera footage shows two youths attacking the storefront of the Soz Kitapevi bookshop, kicking and smashing glass in both the window and the door. The door frame was also damaged.

Bookshop employee Dogan Simsek discovered the damage when he arrived to open the shop. He described security footage of the attack, which took place at 8:19 a.m., to Compass.

“They came at it like a target,” he said. “They attacked in a very cold-blooded manner, and then they walked away as if nothing had happened.”

The security camera did not clearly capture the faces of either youth, and police are still attempting to identify the perpetrators.

During the first attack on Feb. 7, the glass of the front door was smashed and the security camera mangled. Both have since been repaired.

Simsek told the Turkish national daily Milliyet that these are the first such incidents he has witnessed in the 10 years he has worked there.

“We sit and drink tea with our neighbors and those around us; there are no problems in that regard,” said Simsek, though he did acknowledge that local opinion is not all favorable. “This is a Muslim neighborhood, and many have told us not to sell these books.”

The bookshop has received threats from both Muslim hardliners and nationalists. Last November, a man entered the shop and began making accusations that the Soz Kitapevi bookshop was in league with the CIA, saying, “You work with them killing people in Muslim countries, harming Muslim countries.”

 

Systemic Prejudice

The attacks are another example of the animosity that Turkish Christians have faced recently, especially the small Protestant community. The Alliance of Protestant Churches of Turkey released its annual Rights Violations Summary last month, detailing some of the abuses faced by Protestant congregations in 2008.

The report makes it clear that violent attacks, threats and accusations are symptoms arising from an anti-Christian milieu of distrust and misinformation that the Turkish state allows to exist.

The report cites both negative portrayal in the media and state bodies or officials that “have created a ‘crime’ entitled ‘missionary activities,’ identifying it with a certain faith community” as being primarily responsible for the enmity felt towards Christians.

It urges the government to develop effective media watchdog mechanisms to ensure the absence of intolerant or inflammatory programs, and that the state help make the public aware of the rights of Turkish citizens of all faiths.

Report from Compass Direct News