A high school student from Ohio made national headlines recently by getting inoculated despite his family’s anti-vaccination beliefs.
Ethan Lindenberger, 18, who never had been vaccinated, had begun to question his parents’ decision not to immunize him. He went online to research and ask questions, posting to Reddit, a social discussion website, about how to be vaccinated. His online quest went viral.
In March 2019, he was invited to testify before a U.S. Senate Committee hearing on vaccines and preventable disease outbreaks. In his testimony, he said that his mother’s refusal to vaccinate him was informed partly by her online research and the misinformation about vaccines she found on the web.
Lindenberger’s mother is hardly alone. Public health experts have blamed online anti-vaccination discussions in part for New York’s worst measles outbreak in 30 years. Anti-vaccine activists also have been cited for the growth of anti-vaccination sentiments in the U.S. and abroad.
We are associate professors who study health communication. We are also parents who read online vaccination-related posts, and we decided to conduct research to better understand people’s communication behaviors related to childhood vaccinations. Our research focused on the voices most central to this online discussion: mothers. Our findings show that those who oppose vaccinations communicate most about the issue.
What prompts mothers to speak out
A strong majority of parents in the U.S. support vaccinations, yet at the same time, anti-vaccination rates in the U.S. and globally are rising. The World Health Organization identified the reluctance or refusal to vaccinate despite the availability of vaccines as one of the top 10 threats to global health in 2019.
Mothers are critical decision-makers in determining whether their children should be vaccinated. In our study, we surveyed 455 mothers online to determine who communicates most about vaccinations and why.
In general, previous research has shown that people evaluate opinion climates – what the majority opinion seems to say – before expressing their own ideas about issues. This is true particularly on controversial subjects such as affirmative action, abortion or immigration. If an individual perceives their opinion to be unpopular, they may be less likely to say what they think, especially if an issue receives a lot of media attention, a phenomenon known as the spiral of silence.
If individuals, however, have strong beliefs about an issue, they may express their opinions whether they are commonly held or minority perspectives. These views can dominate conversations as others online find support for their views and join in.
Our recent study found that mothers who contributed information online shared several perspectives. Mothers who didn’t strongly support childhood vaccinations were more likely to seek out, pay attention to and forward information about the issue, and to speak out about it, compared with mothers who did support childhood vaccinations.
Those who believed that vaccinations were an important issue (whether they were for or against them) were more likely to express an opinion. And those who opposed vaccinations were more likely to post their beliefs online.
How social media skews facts
Online news content can be influenced by social media information that millions of people read, and it can amplify minority opinions and health myths. For example, Twitter and Reddit posts related to the vaccine-autism myth can drive news coverage.
Those who expressed online opinions about vaccinations also drove news coverage. Other research we co-authored shows that tweets related to the vaccine-autism myth were followed by online news stories covering those tweets in the U.S., Canada and the U.K.
Recent reports about social media sites, such as Facebook, trying to stop false health information from spreading suggest platforms can help correct public misinformation. However, it is unclear what types of communication will counter misinformation and myths that are repeated and reinforced online.
Our work suggests that those who agree with the scientific facts about vaccination may not feel the need to pay attention to this issue or voice their opinions online. They likely already have made up their minds and vaccinated their children.
But from a health communication perspective, it is important that parents who support vaccination voice their opinions and experiences, particularly in online environments.
Studies show that how much parents trust or distrust doctors, scientists or the government influences where they land in the vaccination debate. Perspectives of other parents also provide a convincing narrative to understand the risks and benefits of vaccination.
Scientific facts and messaging about vaccines, such as information from organizations like the World Health Organization and the Centers for Disease Control and Prevention, are important in the immunization debate.
But research demonstrates that social consensus, informed in part by peers and other parents, is also an effective element in conversations that shape decisions.
If mothers or parents who oppose or question vaccinations continue to communicate, while those who support vaccinations remain silent, a false consensus may grow. This could result in more parents believing that a reluctance to vaccinate children is the norm – not the exception.
With concerns growing worldwide about the economic power of digital technology giants such as Google and Facebook, there was plenty of interest internationally in Australia’s Digital Platforms Inquiry.
The Australian Competition and Consumer Commission (ACCC) inquiry was seen as undertaking a forensic account of market dominance by digital platforms, and the implications for Australian media and the rights of citizens around privacy and data protection.
But the major limitation facing the ACCC, and the Australian government, in developing new regulations for digital platforms is jurisdictional authority – given these companies are headquartered in the United States.
More ‘platform neutral’ approach
Among the ACCC’s 23 recommendations is a proposal to reform media regulations to move from the current platform-specific approaches (different rules for television, radio, and print media) towards a “platform-neutral” approach.
This will ensure comparable functions are effectively and consistently regulated:
Digitalisation and the increase in online sources of news and media content highlight inconsistencies in the current sector-specific approach to media regulation in Australia […]
Digital platforms increasingly perform similar functions to media businesses, such as selecting and curating content, evaluating content, and ranking and arranging content online. Despite this, virtually no media regulation applies to digital platforms.
The ACCC’s recommendations to harmonise regulations across different types of media draw on major Australian public inquiries from the early 2010s, such as the Convergence Review and the Australian Law Reform Commission’s review of the national media classification system. These reports identified the inappropriateness of “silo-ised” media laws and regulations in an age of digital convergence.
The ACCC also questions the continued appropriateness of the distinction between platforms and publishers in an age where the largest digital platforms are not simply the carriers of messages circulated among their users.
The report observes that such platforms are increasingly at the centre of digital content distribution. Online consumers increasingly access social news through platforms such as Facebook and Google, as well as video content through YouTube.
The advertising dollar
While the ACCC inquiry focused on the impact of digital platforms on news, we can see how they have transformed the media landscape more generally, and where issues of the wider public good arise.
Their dominance over advertising has undercut traditional media business models. Online now accounts for about 50% of total advertising spend, and the ACCC estimates that 71 cents of every dollar spent on digital advertising in Australia goes to Google or Facebook.
All media are now facing the implications of a more general migration to online advertising, as platforms can better micro-target consumers rather than relying on the broad brush approach of mass media advertising.
The larger issue facing potential competitors to the digital giants is the accumulation of user data. This includes the lack of transparency around algorithmic sorting of such data, and the capacity to use machine learning to apply powerful predictive analytics to “big data”.
The ACCC is also concerned that the “winner-takes-most” nature of digital markets creates a long-term structural crisis for media businesses, with particularly severe implications for public interest journalism.
Digital platform companies do not sit easily within a recognisable industry sector as they branch across information technology, content media, and advertising.
They’re also not alike. While all rely on the capacity to generate and make use of consumer data, their business models differ significantly.
The ACCC chose to focus only on Google and Facebook, but they are quite different entities.
Google dominates search advertising and is largely a content aggregator, whereas Facebook for the most part provides display advertising that accompanies user-generated social media. This presents its own challenges in crafting a regulatory response to the rise of these digital platform giants.
A threshold issue is whether digital platforms should be understood to be media businesses, or businesses in a more generic sense.
Communications policy in the 1990s and 2000s commonly classified digital platforms as carriers rather than publishers. This indemnified them from laws and regulations relating to content that users uploaded to their sites.
But this carriage/content distinction has always coexisted with active measures on the part of the platform companies to manage content hosted on their sites. Controversies around content moderation, and the legal and ethical obligations of platform providers, have intensified greatly in recent years.
To the degree that companies such as Google and Facebook increasingly operate as media businesses, this would bring aspects of their activities within the regulatory purview of the Australian Communication and Media Authority (ACMA).
The ACCC recommended ACMA should be responsible for brokering a code of conduct governing commercial relationships between the digital platforms and news providers.
This would give it powers related to copyright enforcement, allow it to monitor how platforms are acting to guarantee the trustworthiness and reliability of news content, and minimise the circulation of “fake news” on their sites.
Overseas, but over here
Companies such as Google and Facebook are global companies, headquartered in the US, for whom Australia is a significant but relatively small market.
The capacity to address competition and market dominance issues is limited by the fact that real action could only meaningfully occur in their home market, the US.
Australian regulators are going to need to work closely with their counterparts in other countries and regions: the US and the European Union are the two most significant in this regard.
As families in Christchurch bury their loved ones following Friday’s terrorist attack, global attention now turns to preventing such a thing ever happening again.
In particular, the role social media played in broadcasting live footage and amplifying its reach is under the microscope. Facebook and YouTube face intense scrutiny.
New Zealand’s Prime Minister Jacinda Ardern has reportedly been in contact with Facebook executives to press the case that the footage should not be available for viewing. Australian Prime Minister Scott Morrison has called for a moratorium on amateur livestreaming services.
But beyond these immediate responses, this terrible incident presents an opportunity for longer term reform. It’s time for social media platforms to be more open about how livestreaming works, how it is moderated, and what should happen if or when the rules break down.
With the alleged perpetrator apparently flying under the radar prior to the attack in Christchurch, our collective focus has now turned to the online radicalisation of young men.
As part of that, online platforms face increased scrutiny, and Facebook and YouTube have drawn criticism.
After the original livestream was disseminated on Facebook, YouTube became a venue for the re-upload and propagation of the recorded footage.
Both platforms have made public statements about their efforts at moderation.
YouTube noted the challenges of dealing with an “unprecedented volume” of uploads, while Facebook stated:
In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload […]
Focusing chiefly on livestreaming is somewhat reductive. Although the shooter initially streamed his own footage, the greater challenge of controlling the video largely relates to two issues:
- the length of time it was available on Facebook’s platform before it was removed
- the moderation of “mirror” video publication by people who had chosen to download, edit, and re-upload the video for their own purposes.
These issues illustrate the weaknesses of existing content moderation policies and practices.
Not an easy task
Content moderation is a complex and unenviable responsibility. Platforms like Facebook and YouTube are expected to balance the virtues of free expression and newsworthiness with socio-cultural norms and personal desires, as well as the local regulatory regimes of the countries they operate in.
When platforms perform this responsibility poorly (or abdicate it entirely), they pass the task on to others – like the New Zealand internet service providers that blocked access to websites re-distributing the shooter’s footage.
People might reasonably expect platforms like Facebook and YouTube to have thorough controls over what is uploaded on their sites. However, the companies’ huge user bases mean they often must balance the application of automated, algorithmic systems for content moderation (like Microsoft’s PhotoDNA, and YouTube’s ContentID) with teams of human moderators.
We know from investigative reporting that the moderation teams at platforms like Facebook and YouTube are tasked with particularly challenging work. They seem to have a relatively high turnover of staff, who are quickly burnt out by severe workloads while moderating the worst content on the internet. They are supported by only meagre wages and what could be viewed as inadequate mental healthcare.
And while some algorithmic systems can be effective at scale, they can also be subverted by competent users who understand aspects of their methodology. If you’ve ever found a video on YouTube where the colours are distorted, the audio playback is slightly out of sync, or the image is heavily zoomed and cropped, you’ve likely seen someone’s attempt to get around ContentID algorithms.
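The cat-and-mouse dynamic described above can be illustrated with a toy fingerprinting scheme. The sketch below is a hypothetical, drastically simplified stand-in for systems like ContentID or PhotoDNA (whose actual algorithms are proprietary and far more robust): it fingerprints a frame by comparing each pixel to the frame’s average brightness, and shows how a simple zoom-and-crop edit changes the fingerprint enough to defeat exact matching.

```python
# Toy "average hash" fingerprint -- an illustrative stand-in only, not the
# actual ContentID or PhotoDNA algorithm. It shows why simple pixel-level
# edits (zooming, cropping, colour shifts) can defeat naive frame matching.

def average_hash(pixels):
    """Fingerprint a grayscale frame (2D list of 0-255 values):
    each bit records whether a pixel is above the frame's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(p > mean for p in flat)

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return sum(x != y for x, y in zip(a, b))

# A hypothetical 4x4 video frame.
original = [
    [10, 200, 30, 220],
    [15, 190, 25, 210],
    [12, 205, 35, 215],
    [14, 195, 28, 225],
]

# A re-upload of the identical frame matches exactly (0 bits differ).
assert hamming(average_hash(original), average_hash(original)) == 0

# "Zoom and crop": take the centre 2x2 region and crudely rescale it back
# to 4x4 by repeating rows and pixels. Same image content to a human eye,
# but the fingerprint no longer matches bit-for-bit.
centre = [row[1:3] for row in original[1:3]]
zoomed = [[p for p in row for _ in (0, 1)] for row in centre for _ in (0, 1)]

mismatch = hamming(average_hash(original), average_hash(zoomed))
print(f"bits differing after zoom-and-crop: {mismatch} of 16")
```

Real systems use perceptual hashes engineered to tolerate exactly these edits, which is why evaders resort to heavier distortions – colour shifts, audio desync, aggressive cropping – of the kind described above.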
For online platforms, the response to terror attacks is further complicated by the difficult balance they must strike between protecting users from gratuitous or appalling footage and their commitment to inform people seeking news through their platform.
We must also acknowledge the other ways livestreaming features in modern life. Livestreaming is a lucrative niche entertainment industry, with thousands of innocent users broadcasting hobbies with friends from board games to mukbang (social eating), to video games. Livestreaming is important for activists in authoritarian countries, allowing them to share eyewitness footage of crimes, and shift power relationships. A ban on livestreaming would prevent a lot of this activity.
We need a new approach
The challenges Facebook and YouTube face in addressing livestreamed hate crimes tell us something important. We need a more open, transparent approach to moderation. Platforms must talk openly about how this work is done, and be prepared to incorporate feedback from our governments and society more broadly.
A good place to start is the Santa Clara principles, generated initially from a content moderation conference held in February 2018 and updated in May 2018. These offer a solid foundation for reform, stating:
- companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines
- companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension
- companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.
A more socially responsible approach to platforms’ roles as moderators of public discourse necessitates a move away from the black-box secrecy platforms are accustomed to — and a move towards more thorough public discussions about content moderation.
In the end, greater transparency may facilitate a less reactive policy landscape, in which both public policy and public opinion are informed by a greater understanding of the complexities of managing new and innovative communications technologies.
The shocking mass-shooting in Christchurch on Friday is notable for using livestreaming video technology to broadcast horrific first-person footage of the shooting on social media.
In the highly disturbing video, the gunman drives to the Masjid Al Noor mosque, walks inside and shoots multiple people before leaving the scene in his car.
The use of social media technology and livestreaming marks the attack as different from many other terrorist incidents. It is a form of violent “performance crime”: the video streaming is a central component of the violence itself, not incidental to the crime or a disgusting trophy for the perpetrator to re-watch later.
In the past, terrorism functioned according to what has been called the “theatre of terror”, which required the media to report on the spectacle of violence created by the group. Nowadays, it’s much easier for someone to both create the spectacle of horrific violence and distribute it widely by themselves.
In an era of social media, which is driven in large part by spectacle, we all have a role to play in ensuring that terrorists aren’t rewarded for their crimes with our clicks.
Performance crime is about notoriety
There is a tragic and recent history of performance crime videos that use livestreaming and social media video services as part of their tactics.
In 2017, for example, the sickening murder video of an elderly man in Ohio was uploaded to Facebook, and the torture of a man with disabilities in Chicago was livestreamed. In 2015, the murder of two journalists was simultaneously broadcast on-air, and livestreamed.
American journalist Gideon Lichfield wrote of the 2015 incident that the killer:
didn’t just want to commit murder – he wanted the reward of attention, for having done it.
Performance crimes can be distinguished from the way traditional terror attacks and propaganda work, such as the hyper-violent videos spread by ISIS in 2014.
Typical propaganda media that feature violence use a dramatic spectacle to raise attention and communicate the group’s message. But the perpetrators of performance crimes often don’t have a clear ideological message to convey.
Steve Stephens, who filmed the 2017 Ohio murder, linked his killing of a random elderly victim to retribution for his own failed relationship. He shot the stranger point-blank on video. Vester Flanagan’s appalling 2015 murder of two journalists seems to have been motivated by his anger at being fired from the same network.
The Christchurch attack was a brutal, planned mass murder of Muslims in New Zealand, but we don’t yet know whether it was about communicating the ideology of a specific group.
Even though it’s easy to identify explicit references to white supremacist ideas in the alleged shooter’s manifesto, the document is also strewn with confusing and inexplicable internet meme references and red herrings. These could be regarded as trolling: attempts to bait the public into interrogating his claims, magnifying the attention paid to the perpetrator and his gruesome killings.
How we should respond
While many questions remain about the attack itself, we need to consider how best to respond to performance crime videos. Since 2012, many academics and journalists have argued that media coverage of mass violence should be limited to prevent the reward of attention from potentially driving further attacks.
That debate has continued following the tragic events in New Zealand. Journalism lecturer Glynn Greensmith argued that our responsibility may well be to limit the distribution of the Christchurch shooting video and manifesto as much as possible.
It seems that, in this case, social media and news platforms have been more mindful about removing the footage, and refusing to rebroadcast it. The video was taken down within 20 minutes by Facebook, which said that in the first 24 hours it removed 1.5 million videos of the attack globally.
Telecommunication service Vodafone moved quickly to block New Zealand users from access to sites that would be likely to distribute the video.
The video is likely to be declared objectionable material, according to New Zealand’s Department of Internal Affairs, which means it is illegal to possess. Many are calling on the public not to share it online.
Simply watching the video can cause trauma
Yet the video still exists, dispersed throughout the internet. It may be removed from official sites, but its online presence is maintained via re-uploads and file-sharing sites. Screenshots of the videos, which frequently appear in news reports, also inherit symbolic and traumatic significance when they serve as visual reminders of the distressing event.
Watching images like these has the potential to provoke vicarious trauma in viewers. Studies since the September 11 attacks suggest that “distant trauma” can be linked to multiple viewings of distressing media images.
While the savage violence of the event is distressing in its own right, this additional potential to traumatise people who simply watch the video is something that also plays into the aims of those committing performance crimes in the name of terror.
Rewarding the spectacle
Platforms like Facebook, Instagram and YouTube are powered by a framework that encourages, rewards and creates performance. People who post cat videos cater to this appetite for entertainment, but so do criminals.
According to British criminologist Majid Yar, the new media environment has created different genres of performance crime. The performances have increased in intensity and criminality – from so-called “happy slapping” videos circulated among adolescents to violent sexual assault videos. The recent attack is a terrifying continuation of this trend, which is predicated on a kind of exhibitionism and a desire to be identified as the performer of the violence.
Researcher Jane O’Dea, who has studied the role played by the media environment in school shootings, claims that we exist in:
a society of the spectacle that regularly transforms ordinary people into “stars” of reality television or of websites like Facebook or YouTube.
Perpetrators of performance crime are inspired by the attention that will inevitably result from the online archive they create leading up to, and during, the event.
We all have a role to play
I have previously argued that this media environment seems to produce violent acts that otherwise may not have occurred. Of course, I don’t mean that the perpetrators are not responsible or accountable for their actions. Rather, performance crime represents a different type of activity specific to the technology and social phenomenon of social media – the accidental dark side of livestreaming services.
Would the alleged perpetrator of this terrorist act in Christchurch still have committed it without the capacity to livestream? We don’t know.
But as Majid Yar suggests, rather than concerning ourselves with old arguments about whether media violence can cause criminal behaviour, we should focus on how the techniques and reward systems we use to represent ourselves to online audiences are in fact a central component of these attacks.
We may hope that social media companies will get better at filtering out violent content, but until they do we should reflect on our own behaviour online. As we like and share content of all kinds on social platforms, let’s consider how our activities could contribute to an overall spectacle society that inspires future perpetrator-produced videos of performance crime – and act accordingly.
Facebook announced last week it would discontinue the partner programs that allow advertisers to use third-party data from companies such as Acxiom, Experian and Quantium to target users.
Graham Mudd, Facebook’s product marketing director, said in a statement:
We want to let advertisers know that we will be shutting down Partner Categories. This product enables third party data providers to offer their targeting directly on Facebook. While this is common industry practice, we believe this step, winding down over the next six months, will help improve people’s privacy on Facebook.
Few people seemed to notice, and that’s hardly surprising. These data brokers operate largely in the background.
The invisible industry worth billions
In 2014, one researcher described the entire industry as “largely invisible”. That’s no mean feat, given how much money is being made. Personal data has been dubbed the “new oil”, and data brokers are very efficient miners. In the 2018 fiscal year, Acxiom expects annual revenue of approximately US$945 million.
The data broker business model involves accumulating information about internet users (and non-users) and then selling it. As such, data brokers have highly detailed profiles on billions of individuals, comprising age, race, sex, weight, height, marital status, education level, politics, shopping habits, health issues, holiday plans, and more.
These profiles come not just from data you’ve shared, but from data shared by others, and from data that’s been inferred. In its 2014 report into the industry, the US Federal Trade Commission (FTC) showed how a single data broker had 3,000 “data segments” for nearly every US consumer.
Based on the interests inferred from this data, consumers are then placed in categories such as “dog owner” or “winter activity enthusiast”. However, some categories are potentially sensitive, including “expectant parent”, “diabetes interest” and “cholesterol focus”, or involve ethnicity, income and age. The FTC’s Jon Leibowitz described data brokers as the “unseen cyberazzi who collect information on all of us”.
In Australia, Facebook launched the Partner Categories program in 2015. Its aim was to “reach people based on what they do and buy offline”. This includes demographic and behavioural data, such as purchase history and home ownership status, which might come from public records, loyalty card programs or surveys. In other words, Partner Categories enables advertisers to use data brokers to reach specific audiences. This is particularly useful for companies that don’t have their own customer databases.
A growing concern
Third party access to personal data is causing increasing concern. This week, Grindr was shown to be revealing its users’ HIV status to third parties. Such news is unsettling, as if there are corporate eavesdroppers on even our most intimate online engagements.
The recent Cambridge Analytica furore also stemmed from third-party access to user data. Indeed, apps created by third parties have proved particularly problematic for Facebook. From 2007 to 2014, Facebook encouraged external developers to create apps for users to add content, play games, share photos, and so on.
Facebook then gave the app developers wide-ranging access to user data, and to users’ friends’ data. The data shared might include details of schooling, favourite books and movies, or political and religious affiliations.
As one group of privacy researchers noted in 2011, this process, “which nearly invisibly shares not just a user’s, but a user’s friends’ information with third parties, clearly violates standard norms of information flow”.
With the Partner Categories program, the buying, selling and aggregation of user data may be largely hidden, but is it unethical? The fact that Facebook has moved to stop the arrangement suggests that it might be.
More transparency and more respect for users
To date, there has been insufficient transparency, insufficient fairness and insufficient respect for user consent. This applies to Facebook, but also to app developers, and to Acxiom, Experian, Quantium and other data brokers.
Users might have clicked “agree” to terms and conditions that contained a clause ostensibly authorising such sharing of data. However, it’s hard to construe this type of consent as morally justifying.
In Australia, new laws are needed. Data flows in complex and unpredictable ways online, and legislation ought to provide, under threat of significant penalties, that companies (and others) must abide by reasonable principles of fairness and transparency when they deal with personal information. Further, such legislation can help specify what sort of consent is required, and in which contexts. Currently, the Privacy Act doesn’t go far enough, and is too rarely invoked.
In its 2014 report, the US Federal Trade Commission called for laws that enabled consumers to learn about the existence and activities of data brokers. That should be a starting point for Australia too: consumers ought to have reasonable access to information held by these entities.
Time to regulate
Having resisted regulation since 2004, Mark Zuckerberg has finally conceded that Facebook should be regulated – and advocated for laws mandating transparency for online advertising.
Historically, Facebook has made a point of dedicating itself to openness, but Facebook itself has often operated with a distinct lack of openness and transparency. Data brokers have been even worse.
Facebook’s motto used to be “Move fast and break things”. Now Facebook, data brokers and other third parties need to work with lawmakers to move fast and fix things.
Facebook has had a bad few weeks. The social media giant had to apologise for failing to protect the personal data of millions of users from being accessed by data mining company Cambridge Analytica. Outrage is brewing over its admission to spying on people via their Android phones. Its stock price plummeted, while millions deleted their accounts in disgust.
Facebook has also faced scrutiny over its failure to prevent the spread of “fake news” on its platforms, including via an apparent orchestrated Russian propaganda effort to influence the 2016 US presidential election.
Facebook’s actions – or inactions – facilitated breaches of privacy and of human rights associated with democratic governance. But its business model – and those of its social media peers generally – might simply be incompatible with human rights.
In some ways, social media has been a boon for human rights – most obviously for freedom of speech.
Previously, the so-called “marketplace of ideas” was technically available to all (in “free” countries), but was in reality dominated by the elites. While all could equally exercise the right to free speech, we lacked equal voice. Gatekeepers, especially in the form of the mainstream media, largely controlled the conversation.
But today, anybody with internet access can broadcast information and opinions to the whole world. While not all will be listened to, social media is expanding the boundaries of what is said and received in public. The marketplace of ideas is effectively bigger, broader and more diverse.
Social media enhances the effectiveness of non-mainstream political movements, public assemblies and demonstrations, especially in countries that exercise tight controls over civil and political rights, or have very poor news sources.
Social media played a major role in co-ordinating the massive protests that brought down dictatorships in Tunisia and Egypt, as well as large revolts in Spain, Greece, Israel, South Korea, and the Occupy movement. More recently, it has facilitated the rapid growth of the #MeToo and #neveragain movements, among others.
The bad and the ugly
But the social media “free speech” machines can create human rights difficulties. Those newly empowered voices are not necessarily desirable voices.
Video sharing site YouTube seems to automatically guide viewers to the fringiest versions of what they might be searching for. A search on vegetarianism might lead to veganism; jogging to ultra-marathons; Donald Trump’s popularity to white supremacist rants; and Hillary Clinton to 9/11 trutherism.
YouTube, via its algorithm’s natural and probably unintended impacts, “may be one of the most powerful radicalising instruments of the 21st century”, with all the attendant human rights abuses that might follow.
The business model and human rights
Human rights abuses might be embedded in the business model that has evolved for social media companies in their second decade.
Essentially, those models are based on the collection and use for marketing purposes of their users’ data. And the data they have is extraordinary in its profiling capacities, and in the consequent unprecedented knowledge base and potential power it grants to these private actors.
Indirect political influence is commonly exercised, even in the most credible democracies, by private bodies such as major corporations. This power can be partially constrained by “anti-trust laws” that promote competition and prevent undue market dominance.
Anti-trust measures could, for example, be used to hive off Instagram from Facebook, or YouTube from Google. But these companies’ power essentially arises from the sheer number of their users: in late 2017, Facebook was reported as having more than 2.2 billion active users. Anti-trust measures seek to limit a company’s acquisitions; they do not seek to cap its number of customers.
Power through knowledge
In 2010, Facebook conducted an experiment by randomly deploying a non-partisan “I voted” button into 61 million feeds during the US mid-term elections. That simple action led to 340,000 more votes, or about 0.14% of the US voting population. This number can swing an election. A bigger sample would lead to even more votes.
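The reported percentage is easy to sanity-check. A minimal sketch of the arithmetic (the voting-age population figure below is an assumption, roughly the 2010 US estimate; it is not taken from the study itself):

```python
# Sanity check of the reported "I voted" experiment figures.
# voting_age_population is an assumed 2010 estimate of ~235 million.
extra_votes = 340_000
voting_age_population = 235_000_000

share = extra_votes / voting_age_population * 100
print(f"{share:.2f}%")  # prints roughly 0.14%
```

The implied base of about 240 million is consistent with the US voting-age population, so the 0.14% figure checks out.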
So Facebook knows how to deploy the button to sway an election, which would clearly be lamentable. However, the mere possession of that knowledge makes Facebook a political player. It now knows the button’s political impact, the types of people it is likely to motivate, which party is favoured by its deployment or non-deployment, and at what times of day.
It might seem inherently incompatible with democracy for that knowledge to be vested in a private body. Yet the retention of such data is the essence of Facebook’s ability to make money and run a viable business.
Can Facebook influence an election result?
A study has shown that, from an analysis of 70 “likes”, a computer can judge a person’s personality more accurately than their friends or flatmates can, and from 150 likes, more accurately than their family. From 300 likes it can outperform one’s spouse.
This enables the micro-targeting of people for marketing messages – whether those messages market a product, a political party or a cause. This is Facebook’s product, from which it generates billions of dollars. It enables extremely effective advertising and the manipulation of its users. This is so even without Cambridge Analytica’s underhanded methods.
Advertising is manipulative: that is its point. Yet it is a long bow to label all advertising as a breach of human rights.
Advertising is available to all with the means to pay. Social media micro-targeting has become another battleground where money is used to attract customers and, in the political arena, influence and mobilise voters.
While the influence of money in politics is pervasive – and probably inherently undemocratic – it seems unlikely that spending money to deploy social media to boost an electoral message is any more a breach of human rights than other overt political uses of money.
Yet the extraordinary scale and precision of its manipulative reach might justify differential treatment of social media compared to other advertising, as its manipulative political effects arguably undermine democratic choices.
As with mass data collection, it may eventually be concluded that such reach is simply incompatible with democratic and human rights.
Finally, there is the issue of the spread of misinformation.
While paid advertising may not breach human rights, “fake news” distorts and poisons democratic debate. It is one thing for millions of voters to be influenced by precisely targeted social media messages, but another for maliciously false messages to influence and manipulate millions – whether paid for or not.
In a Declaration on Fake News, several UN and regional human rights experts said fake news interfered with the right to know and receive information – part of the general right to freedom of expression.
Its mass dissemination may also distort rights to participate in public affairs. Russia and Cambridge Analytica (assuming allegations in both cases to be true) have demonstrated how social media can be “weaponised” in unanticipated ways.
Yet it is difficult to know how social media companies should deal with fake news. The suppression of fake news is the suppression of speech – a human right in itself.
The preferred solution outlined in the Declaration on Fake News is to develop technology and digital literacy to enable readers to more easily identify fake news. The human rights community seems to be trusting that the proliferation of fake news in the marketplace of ideas can be corrected with better ideas rather than censorship.
However, one cannot be complacent in assuming that “better speech” triumphs over fake news. A recent study concluded fake news on social media:
… diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information.
Also, internet “bots” apparently spread true and false news at the same rate, which indicates that:
… false news spreads more than the truth because humans, not robots, are more likely to spread it.
The depressing truth may be that human nature is attracted to fake stories over the more mundane true ones, often because they satisfy predetermined biases, prejudices and desires. And social media now facilitates their wildfire spread to an unprecedented degree.
Perhaps social media’s purpose – the posting and sharing of speech – cannot help but generate a distorted and tainted marketplace of fake ideas that undermine political debate and choices, and perhaps human rights.
It is premature to assert that the very collection of massive amounts of data is irreconcilable with the right to privacy (and even with rights relating to democratic governance).
Similarly, it is premature to decide that micro-targeting manipulates the political sphere beyond the bounds of democratic human rights.
Finally, it may be that better speech and corrective technology will help to undo fake news’ negative impacts: it is premature to assume that such solutions won’t work.
However, by the time such conclusions can be reached, it may be too late to do much about them. This may be a case where government regulation and international human rights law – and even business acumen and expertise – lag too far behind technological developments to appreciate their human rights dangers.
At the very least, we must now seriously question the business models that have emerged from the dominant social media platforms. Maybe the internet should be rewired from the grassroots, rather than be led by digital oligarchs’ business needs.
The Labor Party’s recent decision to ban its candidates from using their own social media accounts as publicity platforms at the next federal election may be a sign that society’s infatuation with social media as a source of news and information is cooling.
Good evidence for this emerged recently with the publication of the 2018 findings from the Edelman Trust Barometer. The annual study has surveyed more than 33,000 people across the globe about how much trust they have in institutions, including government, media, businesses and NGOs.
This year, there was a sharp increase in trust in journalism as a source of news and information, and a decline in trust in social media and search engines for this purpose. Globally, trust in journalism rose five points to 59%, while trust in social media and search engines fell two points to 51% – a gap of eight points.
In Australia, the level of trust in both was below the global average, but the gap between them was wider: 17 points, with 52% for journalism and 35% for social media and search engines.
Consequences of poor social media savvy
Labor’s decision may also reflect a healthy distrust of its candidates’ judgement about how to use social media for political purposes.
Liberal Senator Jim Molan’s recent sharing of an anti-Islamic post by the British right-wing extremist group Britain First on his Facebook account showed how poor some individual judgements can be.
If ever there was a two-edged sword in politics, social media is it. It gives politicians a weapon with which to cut their way past traditional journalistic gatekeepers and reach the public directly, but it also exposes them to public scrutiny with a relentless intensity that previous generations of politicians never had to endure.
This intensity comes from two sources: the 24/7 news cycle with the associated nonstop interaction between traditional journalism and social media, and the opportunity that digital technology gives everyone to jump instantaneously into public debate.
So Molan’s stupidity, for example, now attracts criticism from the other side of the world. Brendan Cox, widower of the British politician Jo Cox, who was murdered by a man yelling “Britain first”, has weighed in.
The interaction between traditional journalism and social media also means journalists can latch onto stories much more quickly because there are countless pairs of eyes and ears out there tipping them off.
The result of this scrutiny is that public figures can never be sure they are off-camera, as it were. This means there has been a significant reduction in their power to control the flow of information about themselves. They are liable to be “on the record” anywhere there is a mic or a smartphone – and may not even know it.
Politics then and now
On Sunday night, the ABC aired part one of the two-part documentary Bob Hawke: The Larrikin and the Leader. In it, Graham Richardson says of Hawke:
He did some appalling things when drunk … He was lucky that he went through an era where he couldn’t be pinged. We didn’t have the internet. We didn’t have mobile phones. Let’s face it, a Bob Hawke today behaving in the same manner would never become prime minister. He’d have been buried long before he got near the parliament.
Would we now think differently of a politician like Bob Hawke if some of his well-documented excesses had been captured and circulated on social media in this way?
Perhaps not. Hawke was of his time, an embodiment of the national mood and of what Australians imagine to be the national larrikin character. He might have thrived.
With Hawke, what you saw was what you got. So he had a built-in immunity to social media’s particular strength: its capacity to show people up as ridiculous, dishonest or hypocritical.
And his political opponent Malcolm Fraser was, in his later years, adept at using Twitter to criticise the government of one of his Liberal successors as Prime Minister, Tony Abbott.
Yet by exerting the iron discipline for which he was famous, saying exactly what he wanted to say and not a word more, Fraser avoided the pitfalls that the likes of Senator Molan stumble into.
Indeed, US President Donald Trump’s reputation for Twitter gaffes hasn’t hurt his popularity among his base, and is even lauded by some as a mark of authenticity.
So it is likely that the politicians of the past would not have fared very differently from those of the present. The competent would have adapted and used social media to their advantage; the incompetent would have been shown up for what they are.
Social platforms under fire
Social media has the potential to strengthen democratic life. It makes all public figures – including journalists – more accountable. But as we have seen, especially in the 2016 US presidential elections, it can also be used to weaken democratic life by amplifying the spread of false information.
As a result, democracies everywhere are wrestling with the overarching problem of how to make the giant social media platforms, especially Facebook, accountable for how they use their publishing power.
Out of all this, one trend seems clear: where news and information is concerned, society is no longer dazzled by the novelty of social media and is wakening to its weaknesses.
The terrorist attacks in Paris have resonated around the world. In addition to physical violence, Islamic State (IS) is pursuing a strategy of socially mediated terrorism. The symbolic responses of its opponents can be predicted and may inadvertently further its aims.
In the emotion of the moment, we need to act. We need to be cautious, however, of symbolic reactions that divide Muslims and non-Muslims. We need emblems that act against the xenophobia that is a recruiting tool for jihadists.
Reactions from the West should not erode the Muslim leadership that is essential to overturning “Islamic State”. Queen Rania of Jordan points out:
What the extremists want is to divide our world along fault lines of religion and culture, and so a lot of people in the West may have stereotypes against Arabs and Muslims. But really this fight is a fight between the civilised world and a bunch of crazy people who want to take us back to medieval times. Once we see it that way, we realise that this is about all of us coming together to defend our way of life.
Queen Rania’s statement characterises the Paris attacks as part of a wider conflict around cultural values. How are these values playing out symbolically across the globe?
Propaganda seeks predictable responses
IS’s socially mediated propaganda is sophisticated and planned. This supports an argument that the Paris attacks are the beginning of a global campaign. Symbolic materials characterise IS as invincible. However, other evidence may indicate that it is weak.
The spontaneous celebration on Twitter by IS supporters was predictable. Its representational coverage of the Paris attacks, however, suggests deep planning.
This planning is embedded in professionally designed images. A reworked image depicts the Eiffel Tower as a triumphal arch with the IS flag flying victoriously on top.
The tower is illuminated and points to the heavens and a God-given victory. The inclusion of a road running through the Eiffel Tower provides a sense of speed, change, even progress. In Arabic, the text states, “We are coming, France” and “The state of Khilafa”.
IS is using symbolic representations of the Paris attacks to garner new recruits.
A sophisticated pre-prepared image of an intrepid fighter walking away from a Paris engulfed in flames was quickly distributed. It is inscribed with the word “France under fire” in Arabic and French.
The five million young Muslims in France are particular targets. Among online recruitment materials are videos calling them to join other young French nationals who are with IS.
Support for the victims in Paris and for the democratic values of liberty, equality and fraternity is embedded in the blue, white and red lights movement. These lights shone in major cities in the US, Britain, Europe, Australia, New Zealand, China, Japan, Taiwan and South America. The blue, white and red lights also were displayed in Egypt, Saudi Arabia, the UAE and Malaysia.
However, overall the light displays were seen in few Muslim-majority countries. Such countries are in an invidious position: display the lights and you may be characterised as a lackey of the West; don’t display them and you appear unsympathetic to the victims.
Support also is embedded in a parallel Facebook function that allows members to activate a tri-colour filter. Adapted from a rainbow filter used to support same-sex marriage, this filter attracts those with liberal sentiments.
The question of whether to use the French flag to show sympathy for the victims is invidious at a personal level. Many people find themselves exploited and condemned to poverty by neoliberal economic models. They feel sympathy for the victims, yet are bitter about how they are being treated by “the West”, including France.
Perils of an ‘us and them’ mindset
As the blue, white and red activism plays out around the globe, there is a potential for this to transform into a symbolic manifestation of an “us and them” mentality. Such a division would support xenophobic forces, which steer recruits towards IS.
The global impact of the attacks can be related to the iconic status of Paris. The attacks hold a personal dimension for millions of people who have visited this city. They have a sense of “there but for the grace of God, go I”. This emotion echoes responses to the destruction of the World Trade Center in New York in 2001.
The Japanese and Italian cafes included in the attacks are symbolic targets for their respective countries. In March 2015, IS spokesman Abu Mohammad al-Adnani stated that the group would attack “Paris, before Rome”. Rome is a target because of its symbolic role as the centre of Christianity. Japan is a target because of its role in coalition forces, and it had already suffered the execution of Japanese hostages early in 2015.
In Japan, the cultural reaction has been relatively low key, as part of a strategy of minimising terrorist attention. The blue, white and red lights solidarity received minimal press coverage. There have been few reports of the Japanese restaurant that was one of the targets. In addition to factual coverage of the attacks, Japanese reports have concentrated on implications for security at the 2020 Summer Olympics in Tokyo.
Are there any symbols indicating good news? The Syrian passport found near the body of one of the attackers could be a sign of weakness. It could have been “planted” there – why carry a passport on a suicide mission?
If so, its purpose is to increase European xenophobia and encourage the closing of borders to Syrian refugees. This suggests the mass exodus of Muslim refugees from Syria is hurting IS. The propaganda could be a sign of alarm in IS leadership ranks.
In our responses to the Paris attacks, the grief of the West should not be allowed to overshadow the opprobrium of Muslim countries. Muslims are best placed to challenge the Islamic identity of this self-declared state.
As Queen Rania states, the war against IS must be led by Muslims and Arabs. To ensure success, the international community needs to support, not lead, Muslim efforts.
While the average person was getting on with life in Paris before last Friday’s terror bombings and shootings, Twitter threads in Arabic from the Middle East were urging attacks on coalition forces in their home countries.
“Advance, advance – forward, forward” they said, regarding Paris.
Iraqi forces had warned coalition countries one day before the attack that IS’s leader Abu Bakr al-Baghdadi had called for “[…] bombings or assassinations or hostage taking in the coming days”.
In addition, messages from the Islamic State’s Al-Hayat Media Center, sent via the messaging app Telegram, suggested that something more sinister might be afoot, or at least in the works.
Hiding in privacy
Telegram is an app, launched in 2013, that can be installed on almost any device and allows users to exchange messages, with a strong focus on privacy.
An important tool that agencies use to tackle violent extremism is the counter-narrative. The aim here is to address and challenge the propaganda and misinformation being disseminated by IS to potential recruits or IS sympathisers.
This is used to disrupt the flow of information and the recruitment process. But on Telegram – since information moves in one direction – it is harder to counter jihadist propaganda and lies.
IS uses Telegram not just to post propaganda, but to spread training manuals, advice on how to obtain and import weapons, instructions on how to make bombs, and guidance on carrying out lone jihadi attacks on individuals with household equipment.
It carries posts on attacking soft targets and activating lone-wolf-style attacks, and gives the green light for small terrorist pockets or cells within the community to conduct their onslaught.
Inciting acts of violence is a key element of IS’s radical religious ideology. It holds that its people are following the “true” path of Allah and are helping to bring about a great apocalyptic battle between coalition forces and “Rome”, which to them is the will of Allah.
Social media advantage
Social media is prominent in recruitment strategies used by terrorist groups, in particular, IS.
Facebook is a key platform for gathering young fans, supporters and recruits, and for inciting them to acts of violence by means of propaganda and the exploitation of Islamic grievances.
When it comes to real-time orchestrating of terror events, IS is adopting encrypted messaging applications – including Kik, Surespot, Wickr and Telegram, as previously mentioned – that are very difficult to compromise or even hack.
What is advantageous for IS is that messages have what is termed a “burn time”: they are deleted after a set period and will not show up on a phone or other device.
This benefits recruiters, as it means they can fly under the radar more readily, making it more difficult for agencies to detect and prevent attacks.
IS is also using the PlayStation 4 network to recruit and plan attacks. Belgium’s deputy prime minister and minister of security and home affairs, Jan Jambon, said the PlayStation 4 was more difficult for authorities to monitor than WhatsApp and other applications.
After the Paris attacks
Not long after the attacks in Paris, IS released an audio and written statement claiming responsibility for them. This was systematically and widely broadcast across social media platforms.
Contained in this statement were warnings that “[…] this is just the beginning of attacks […]”. At the same time, a propaganda video entitled “What are you waiting for?” was circulated on Facebook, Twitter and Telegram.
IS continues to use social media as part of its terror campaign. Its aim is to maintain the focus of its recruits and fighters within coalition countries. It also aims to further recruit home-grown jihadists to acts of violence while driving fear into the heartland of European and Western countries.
While privacy is something on everyone’s mind, encryption applications have gained much momentum to allow people to communicate without worrying about unwanted third party access.
Unfortunately, terrorists have also utilised these features as a means to go undetected in organising real-time operations and preparation for terrorist attacks.
Terrorists are ahead of the game, and we don’t want to be playing continual catch-up. If terrorists are to continue using these applications to arrange acts of terrorism covertly, then security agencies need to be able to balance the collection of information from technologically advanced services with human intelligence.
Dealing with the threat of IS and other terror organisations misusing encrypted applications would mean law enforcement and intelligence agencies requiring access to encrypted communications. One could argue this may compromise data security, and that it should also be assessed alongside internet vulnerabilities, but it must be balanced against the current climate of security threats both domestic and international.