If you have kids, chances are you’ve worried about their presence on social media.
Who are they talking to? What are they posting? Are they being bullied? Do they spend too much time on it? Do they realise their friends’ lives aren’t as good as they look on Instagram?
We asked five experts if social media is damaging to children and teens.
Four out of five experts said yes
The four experts who ultimately found social media is damaging cited its negative effects on mental health, disturbances to sleep, cyberbullying, social comparison, privacy concerns, and body image.
However, they also conceded it can have positive effects in connecting young people with others, and living without it might even be more ostracising.
The dissenting voice said it’s not social media itself that’s damaging, but how it’s used.
Here are their detailed responses:
If you have a “yes or no” health question you’d like posed to Five Experts, email your suggestion to: firstname.lastname@example.org
Karyn Healy is a researcher affiliated with the Parenting and Family Support Centre at The University of Queensland and a psychologist working with schools and families to address bullying. Karyn is co-author of a family intervention for children bullied at school. Karyn is a member of the Queensland Anti-Cyberbullying Committee, but not a spokesperson for this committee; this article presents only her own professional views.
Finally, some good news from the weirdo-sphere that is social media. Twitter CEO Jack Dorsey has announced that, effective November 22, the microblogging platform will ban all political advertising – globally.
This is a momentous move by Twitter. It comes when Facebook and its CEO Mark Zuckerberg are under increasing pressure to deal with the amount of mis- and disinformation published via paid political advertising on Facebook.
Zuckerberg recently told a congressional hearing Facebook had no plans to fact-check political ads, and he did not answer a direct question from Congresswoman Alexandria Ocasio-Cortez about whether Facebook would take down political ads found to be untrue. Not a good look.
A few days after Zuckerberg’s train wreck appearance before the congressional committee, Twitter announced its move.
While Twitter should get credit for its sensible move, the microblogging company is tiny compared to Facebook and Google. So, until the two giants change, Twitter’s political ad ban will have little effect on elections around the globe.
A symptom of the democratic flu
It’s important to call out Google on political advertising. The company often manages to fly under the radar on this issue, hiding behind Facebook, which takes most of the flak.
The global social media platforms are injecting poison into liberal democratic systems around the globe. The misinformation and outright lies they allow to be published on their platforms are partly responsible for the increasingly bitter partisan divides in most mature liberal democracies.
Add to this the micro-targeting of voters illustrated by the Cambridge Analytica scandal, and a picture emerges of long-standing democratic systems under extreme stress. This is clearly exemplified by the UK parliament’s paralysis over Brexit and the canyon-deep political divides in the US.
Banning political advertising only deals with a symptom of the democratic flu the platforms are causing. The root cause of the flu is the fact social media platforms are no longer only platforms – they are publishers.
Until they acknowledge this and agree to adhere to the legal and ethical frameworks connected with publishing, our democracies will not recover.
Not platforms, but publishers
Being a publisher is complex and much more expensive than being a platform. You have to hire editorial staff (unless you can create algorithms advanced enough to do editorial tasks) to fact-check, edit and curate content. And you have to become a good corporate citizen, accepting you have social responsibilities.
Convincing the platforms to accept their publisher role is the most long-term and sustainable way of dealing with the current toxic content issue.
Accepting publisher status could be a win-win, where the social media companies rebuild trust with the public and governments by acting ethically and socially responsibly, stopping the poisoning of our democracies.
Mark Zuckerberg claims Facebook users being able to publish lies and misinformation is a free speech issue. It is not. Free speech is a privilege as well as a right and, like all privileges, it comes with responsibilities and limitations.
Examples of limitations are defamation laws and racial vilification and discrimination laws. And that’s just the legal framework. The strong ethical framework that applies to publishing should be added to this.
Ownership concentration like never before
Then, there’s the global social media oligopoly issue. Never before in recorded human history have we seen any industry achieve the level of ownership concentration displayed by the social media companies. This is why this issue is so deeply serious. It’s global, it reaches billions, and the money and profits involved are staggering.
Facebook co-founder Chris Hughes got it absolutely right when he pointed out in his New York Times article that the Federal Trade Commission – the US equivalent of the Australian Competition and Consumer Commission – got it wrong when it allowed Facebook to buy Instagram and WhatsApp.
Hughes wants Facebook broken up and points to the attempts from parts of US civil society moving in this direction. He writes:
This movement of public servants, scholars and activists deserves our support. Mark Zuckerberg cannot fix Facebook, but our government can.
Yesterday, I posted on my Facebook timeline for the first time since the Cambridge Analytica scandal broke. I made the point that after Twitter’s announcement, the ball is now squarely in Facebook’s and Google’s courts.
For research and professional reasons, I cannot delete my Facebook account. But I can pledge to not be an active Facebook user until the company grows up and shoulders its social responsibility as an ethical publisher that enhances our democracies instead of undermining them.
A high school student from Ohio made national headlines recently by getting inoculated despite his family’s anti-vaccination beliefs.
Ethan Lindenberger, 18, who had never been vaccinated, had begun to question his parents’ decision not to immunize him. He went online to research and ask questions, posting to Reddit, a social discussion website, about how to be vaccinated. His online quest went viral.
In March 2019, he was invited to testify before a U.S. Senate Committee hearing on vaccines and preventable disease outbreaks. In his testimony, he said that his mother’s refusal to vaccinate him was informed partly by her online research and the misinformation about vaccines she found on the web.
Lindenberger’s mother is hardly alone. Public health experts have blamed online anti-vaccination discussions in part for New York’s worst measles outbreak in 30 years. Anti-vaccine activists also have been cited for the growth of anti-vaccination sentiments in the U.S. and abroad.
We are associate professors who study health communication. We are also parents who read online vaccination-related posts, and we decided to conduct research to better understand people’s communication behaviors related to childhood vaccinations. Our research examined mothers, the voices most central to this discussion online, and our findings show that those who oppose vaccinations communicate most about this issue.
What prompts mothers to speak out
A strong majority of parents in the U.S. support vaccinations, yet at the same time, anti-vaccination rates in the U.S. and globally are rising. The World Health Organization identified the reluctance or refusal to vaccinate despite the availability of vaccines as one of 10 top threats to global health in 2019.
Mothers are critical decision-makers in determining whether their children should be vaccinated. In our study, we surveyed 455 mothers online to determine who communicates most about vaccinations and why.
In general, previous research has shown that people evaluate opinion climates – what the majority opinion seems to say – before expressing their own ideas about issues. This is true particularly on controversial subjects such as affirmative action, abortion or immigration. If an individual perceives their opinion to be unpopular, they may be less likely to say what they think, especially if an issue receives a lot of media attention, a phenomenon known as the spiral of silence.
If individuals, however, have strong beliefs about an issue, they may express their opinions whether they are commonly held or minority perspectives. These views can dominate conversations as others online find support for their views and join in.
Our recent study found that mothers who contributed information online shared several perspectives. Mothers who didn’t strongly support childhood vaccinations were more likely to seek out, pay attention to, forward and speak out about information on the issue – compared to those who did support childhood vaccinations.
Those who believed that vaccinations were an important issue (whether they were for or against them) were more likely to express an opinion. And those who opposed vaccinations were more likely to post their beliefs online.
How social media skews facts
Online news content can be influenced by social media information that millions of people read, and it can amplify minority opinions and health myths. For example, Twitter and Reddit posts related to the vaccine-autism myth can drive news coverage.
Those who expressed online opinions about vaccinations also drove news coverage. Other research we co-authored shows that posts related to the vaccine-autism myth were followed by related online news stories in the U.S., Canada and the U.K.
Recent reports about social media sites, such as Facebook, trying to interrupt false health information from spreading can help correct public misinformation. However, it is unclear what types of communication will counter misinformation and myths that are repeated and reinforced online.
Our work suggests that those who agree with the scientific facts about vaccination may not feel the need to pay attention to this issue or voice their opinions online. They likely already have made up their minds and vaccinated their children.
But from a health communication perspective, it is important that parents who support vaccination voice their opinions and experiences, particularly in online environments.
Studies show that how much parents trust or distrust doctors, scientists or the government influences where they land in the vaccination debate. Perspectives of other parents also provide a convincing narrative to understand the risks and benefits of vaccination.
Scientific facts and messaging about vaccines, such as information from organizations like the World Health Organization and the Centers for Disease Control and Prevention, are important in the immunization debate.
But research demonstrates that social consensus, informed in part by peers and other parents, is also an effective element in conversations that shape decisions.
If mothers or parents who oppose or question vaccinations continue to communicate, while those who support vaccinations remain silent, a false consensus may grow. This could result in more parents believing that a reluctance to vaccinate children is the norm – not the exception.
With concerns growing worldwide about the economic power of digital technology giants such as Google and Facebook, there was plenty of interest internationally in Australia’s Digital Platforms Inquiry.
The Australian Competition and Consumer Commission (ACCC) inquiry was seen as undertaking a forensic account of market dominance by digital platforms, and the implications for Australian media and the rights of citizens around privacy and data protection.
But the major limitation facing the ACCC, and the Australian government, in developing new regulations for digital platforms is jurisdictional authority – given these companies are headquartered in the United States.
More ‘platform neutral’ approach
Among the ACCC’s 23 recommendations is a proposal to reform media regulations to move from the current platform-specific approaches (different rules for television, radio, and print media) towards a “platform-neutral” approach.
This will ensure comparable functions are effectively and consistently regulated:
Digitalisation and the increase in online sources of news and media content highlight inconsistencies in the current sector-specific approach to media regulation in Australia […]
Digital platforms increasingly perform similar functions to media businesses, such as selecting and curating content, evaluating content, and ranking and arranging content online. Despite this, virtually no media regulation applies to digital platforms.
The ACCC’s recommendations to harmonise regulations across different types of media draw on major Australian public inquiries from the early 2010s, such as the Convergence Review and the Australian Law Reform Commission’s review of the national media classification system. These reports identified the inappropriateness of “silo-ised” media laws and regulations in an age of digital convergence.
The ACCC also questions the continued appropriateness of the distinction between platforms and publishers in an age where the largest digital platforms are not simply the carriers of messages circulated among their users.
The report observes that such platforms are increasingly at the centre of digital content distribution. Online consumers increasingly access social news through platforms such as Facebook and Google, as well as video content through YouTube.
The advertising dollar
While the ACCC inquiry focused on the impact of digital platforms on news, we can see how they have transformed the media landscape more generally, and where issues of the wider public good arise.
Their dominance over advertising has undercut traditional media business models. Online now accounts for about 50% of total advertising spend, and the ACCC estimates that 71 cents of every dollar spent on digital advertising in Australia goes to Google or Facebook.
All media are now facing the implications of a more general migration to online advertising, as platforms can better micro-target consumers rather than relying on the broad brush approach of mass media advertising.
The larger issue facing potential competitors to the digital giants is the accumulation of user data. This includes the lack of transparency around algorithmic sorting of such data, and the capacity to use machine learning to apply powerful predictive analytics to “big data”.
In line with recent critiques of platform capitalism, the ACCC is concerned about the lack of information consumers have about what data the platforms hold and how it’s being used.
It’s also concerned the “winner-takes-most” nature of digital markets creates a long term structural crisis for media businesses, with particularly severe implications for public interest journalism.
Digital platform companies do not sit easily within a recognisable industry sector as they branch across information technology, content media, and advertising.
They’re also not alike. While all rely on the capacity to generate and make use of consumer data, their business models differ significantly.
The ACCC chose to focus only on Google and Facebook, but they are quite different entities.
Google dominates search advertising and is largely a content aggregator, whereas Facebook for the most part provides display advertising that accompanies user-generated social media. This presents its own challenges in crafting a regulatory response to the rise of these digital platform giants.
A threshold issue is whether digital platforms should be understood to be media businesses, or businesses in a more generic sense.
Communications policy in the 1990s and 2000s commonly differentiated digital platforms as carriers. This indemnified them from laws and regulations relating to content that users uploaded onto their sites.
But this carriage/content distinction has always coexisted with active measures on the part of the platform companies to manage content that is hosted on their sites. Controversies around content moderation, and the legal and ethical obligations of platform providers, have accelerated greatly in recent years.
To the degree that companies such as Google and Facebook increasingly operate as media businesses, this would bring aspects of their activities within the regulatory purview of the Australian Communication and Media Authority (ACMA).
The ACCC recommended ACMA should be responsible for brokering a code of conduct governing commercial relationships between the digital platforms and news providers.
This would give it powers related to copyright enforcement, allow it to monitor how platforms are acting to guarantee the trustworthiness and reliability of news content, and minimise the circulation of “fake news” on their sites.
Overseas, but over here
Companies such as Google and Facebook are global companies, headquartered in the US, for whom Australia is a significant but relatively small market.
The capacity to address competition and market dominance issues is limited by the fact real action could only meaningfully occur in their home market of the US.
Australian regulators are going to need to work closely with their counterparts in other countries and regions: the US and the European Union are the two most significant in this regard.
But beyond these immediate responses, this terrible incident presents an opportunity for longer term reform. It’s time for social media platforms to be more open about how livestreaming works, how it is moderated, and what should happen if or when the rules break down.
With the alleged perpetrator apparently flying under the radar prior to this incident in Christchurch, our collective focus is now turned to the online radicalisation of young men.
As part of that, online platforms face increased scrutiny, and Facebook and YouTube have drawn criticism.
After the original livestream was disseminated on Facebook, YouTube became a venue for the re-upload and propagation of the recorded footage.
Both platforms have made public statements about their efforts at moderation.
In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload […]
Focusing chiefly on livestreaming is somewhat reductive. Although the shooter initially streamed his own footage, the greater challenge of controlling the video largely relates to two issues:
the length of time it was available on Facebook’s platform before it was removed
the moderation of “mirror” video publication by people who had chosen to download, edit, and re-upload the video for their own purposes.
These issues illustrate the weaknesses of existing content moderation policies and practices.
Not an easy task
Content moderation is a complex and unenviable responsibility. Platforms like Facebook and YouTube are expected to balance the virtues of free expression and newsworthiness with socio-cultural norms and personal desires, as well as the local regulatory regimes of the countries they operate in.
People might reasonably expect platforms like Facebook and YouTube to have thorough controls over what is uploaded on their sites. However, the companies’ huge user bases mean they often must balance the application of automated, algorithmic systems for content moderation (like Microsoft’s PhotoDNA, and YouTube’s ContentID) with teams of human moderators.
And while some algorithmic systems can be effective at scale, they can also be subverted by competent users who understand aspects of their methodology. If you’ve ever found a video on YouTube where the colours are distorted, the audio playback is slightly out of sync, or the image is heavily zoomed and cropped, you’ve likely seen someone’s attempt to get around ContentID algorithms.
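To see why small edits can defeat automated matching, consider a toy “average hash”. This is not PhotoDNA or ContentID (both are proprietary and far more robust); it is only a minimal sketch of the general idea behind perceptual fingerprinting, using made-up pixel values, to illustrate how a zoom-and-crop re-upload shifts the fingerprint away from a blocklisted original:

```python
# Toy perceptual "average hash": each pixel becomes one bit,
# depending on whether it is brighter than the frame's average.
# Hypothetical example only -- real systems are far more robust.

def average_hash(pixels):
    """Hash a square grid of brightness values (0-255) to a bit string."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return ''.join('1' if p > avg else '0' for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# An 8x8 "video frame" of synthetic brightness values.
frame = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
original = average_hash(frame)

# Simulate a zoom/crop re-upload: blank the outer border,
# keeping only the interior of the frame.
cropped = [[frame[r][c] if 0 < r < 7 and 0 < c < 7 else 0
            for c in range(8)] for r in range(8)]
altered = average_hash(cropped)

# An unmodified copy matches exactly; the cropped copy flips
# many bits, so a naive blocklist lookup no longer fires.
print(hamming(original, original), hamming(original, altered))
```

Production systems tolerate some bit flips by matching within a distance threshold, but every widening of that threshold also raises false positives on legitimate content, which is exactly the scale trade-off that pushes platforms back onto human moderators.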
For online platforms, the response to terror attacks is further complicated by the difficult balance they must strike between their desire to protect users from gratuitous or appalling footage and their commitment to inform people seeking news through their platform.
Facebook and YouTube’s challenges in addressing the issue of livestreamed hate crimes tell us something important. We need a more open, transparent approach to moderation. Platforms must talk openly about how this work is done, and be prepared to incorporate feedback from our governments and society more broadly.
A good place to start is the Santa Clara principles, generated initially from a content moderation conference held in February 2018 and updated in May 2018. These offer a solid foundation for reform, stating:
companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines
companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension
companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.
A more socially responsible approach to platforms’ roles as moderators of public discourse necessitates a move away from the black-box secrecy platforms are accustomed to — and a move towards more thorough public discussions about content moderation.
In the end, greater transparency may facilitate a less reactive policy landscape, where both public policy and opinion have a greater understanding around the complexities of managing new and innovative communications technologies.
The shocking mass-shooting in Christchurch on Friday is notable for using livestreaming video technology to broadcast horrific first-person footage of the shooting on social media.
In the highly disturbing video, the gunman drives to the Masjid Al Noor mosque, walks inside and shoots multiple people before leaving the scene in his car.
The use of social media technology and livestreaming marks the attack as different from many other terrorist incidents. It is a form of violent “performance crime”. That is, the video streaming is a central component of the violence itself; it is not somehow incidental to the crime, or a disgusting trophy for the perpetrator to re-watch later.
In the past, terrorism functioned according to what has been called the “theatre of terror”, which required the media to report on the spectacle of violence created by the group. Nowadays, it’s much easier for someone to both create the spectacle of horrific violence and distribute it widely by themselves.
In an era of social media, which is driven in large part by spectacle, we all have a role to play in ensuring that terrorists aren’t rewarded for their crimes with our clicks.
There is a tragic and recent history of performance crime videos that use livestreaming and social media video services as part of their tactics.
In 2017, for example, the sickening murder video of an elderly man in Ohio was uploaded to Facebook, and the torture of a man with disabilities in Chicago was livestreamed. In 2015, the murder of two journalists was simultaneously broadcast on-air, and livestreamed.
American journalist Gideon Lichfield wrote of the 2015 incident that the killer:
didn’t just want to commit murder – he wanted the reward of attention, for having done it.
Performance crimes can be distinguished from the way traditional terror attacks and propaganda work, such as the hyper-violent videos spread by ISIS in 2014.
Typical propaganda media that feature violence use a dramatic spectacle to raise attention and communicate the group’s message. But the perpetrators of performance crimes often don’t have a clear ideological message to convey.
While many questions remain about the attack itself, we need to consider how best to respond to performance crime videos. Since 2012, many academics and journalists have argued that media coverage of mass violence should be limited to prevent the reward of attention from potentially driving further attacks.
That debate has continued following the tragic events in New Zealand. Journalism lecturer Glynn Greensmith argued that our responsibility may well be to limit the distribution of the Christchurch shooting video and manifesto as much as possible.
It seems that, in this case, social media and news platforms have been more mindful about removing the footage, and refusing to rebroadcast it. The video was taken down within 20 minutes by Facebook, which said that in the first 24 hours it removed 1.5 million videos of the attack globally.
The video is likely to be declared objectionable material, according to New Zealand’s Department of Internal Affairs, which means it is illegal to possess. Many are calling on the public not to share it online.
Simply watching the video can cause trauma
Yet the video still exists, dispersed throughout the internet. It may be removed from official sites, but its online presence is maintained via re-uploads and file-sharing sites. Screenshots of the videos, which frequently appear in news reports, also inherit symbolic and traumatic significance when they serve as visual reminders of the distressing event.
While the savage violence of the event is distressing in its own right, this additional potential to traumatise people who simply watch the video is something that also plays into the aims of those committing performance crimes in the name of terror.
Rewarding the spectacle
Platforms like Facebook, Instagram and YouTube are powered by a framework that encourages, rewards and creates performance. People who post cat videos cater to this appetite for entertainment, but so do criminals.
I have previously argued that this media environment seems to produce violent acts that otherwise may not have occurred. Of course, I don’t mean that the perpetrators are not responsible or accountable for their actions. Rather, performance crime represents a different type of activity specific to the technology and social phenomenon of social media – the accidental dark side of livestreaming services.
Would the alleged perpetrator of this terrorist act in Christchurch still have committed it without the capacity to livestream? We don’t know.
But as Majid Yar suggests, rather than concerning ourselves with old arguments about whether media violence can cause criminal behaviour, we should focus on how the techniques and reward systems we use to represent ourselves to online audiences are in fact a central component of these attacks.
We may hope that social media companies will get better at filtering out violent content, but until they do we should reflect on our own behaviour online. As we like and share content of all kinds on social platforms, let’s consider how our activities could contribute to an overall spectacle society that inspires future perpetrator-produced videos of performance crime – and act accordingly.
Facebook announced last week it would discontinue the partner programs that allow advertisers to use third-party data from companies such as Acxiom, Experian and Quantium to target users.
Graham Mudd, Facebook’s product marketing director, said in a statement:
We want to let advertisers know that we will be shutting down Partner Categories. This product enables third party data providers to offer their targeting directly on Facebook. While this is common industry practice, we believe this step, winding down over the next six months, will help improve people’s privacy on Facebook.
Few people seemed to notice, and that’s hardly surprising. These data brokers operate largely in the background.
The invisible industry worth billions
In 2014, one researcher described the entire industry as “largely invisible”. That’s no mean feat, given how much money is being made. Personal data has been dubbed the “new oil”, and data brokers are very efficient miners. In the 2018 fiscal year, Acxiom expects annual revenue of approximately US$945 million.
The data broker business model involves accumulating information about internet users (and non-users) and then selling it. As such, data brokers have highly detailed profiles on billions of individuals, comprising age, race, sex, weight, height, marital status, education level, politics, shopping habits, health issues, holiday plans, and more.
These profiles come not just from data you’ve shared, but from data shared by others, and from data that’s been inferred. In its 2014 report into the industry, the US Federal Trade Commission (FTC) showed how a single data broker had 3,000 “data segments” for nearly every US consumer.
Based on the interests inferred from this data, consumers are then placed in categories such as “dog owner” or “winter activity enthusiast”. However, some categories are potentially sensitive, including “expectant parent”, “diabetes interest” and “cholesterol focus”, or involve ethnicity, income and age. The FTC’s Jon Leibowitz described data brokers as the “unseen cyberazzi who collect information on all of us”.
In Australia, Facebook launched the Partner Categories program in 2015. Its aim was to “reach people based on what they do and buy offline”. This includes demographic and behavioural data, such as purchase history and home ownership status, which might come from public records, loyalty card programs or surveys. In other words, Partner Categories enables advertisers to use data brokers to reach specific audiences. This is particularly useful for companies that don’t have their own customer databases.
The recent Cambridge Analytica furore stemmed from third parties. Indeed, apps created by third parties have proved particularly problematic for Facebook. From 2007 to 2014, Facebook encouraged external developers to create apps for users to add content, play games, share photos, and so on.
Facebook then gave the app developers wide-ranging access to user data, and to users’ friends’ data. The data shared might include details of schooling, favourite books and movies, or political and religious affiliations.
With the Partner Categories program, the buying, selling and aggregation of user data may be largely hidden, but is it unethical? The fact that Facebook has moved to stop the arrangement suggests that it might be.
More transparency and more respect for users
To date, there has been insufficient transparency, insufficient fairness and insufficient respect for user consent. This applies to Facebook, but also to app developers, and to Acxiom, Experian, Quantium and other data brokers.
Users might have clicked “agree” to terms and conditions that contained a clause ostensibly authorising such sharing of data. However, it’s hard to construe this type of consent as genuine moral justification.
In Australia, new laws are needed. Data flows in complex and unpredictable ways online, and legislation ought to provide, under threat of significant penalties, that companies (and others) must abide by reasonable principles of fairness and transparency when they deal with personal information. Further, such legislation can help specify what sort of consent is required, and in which contexts. Currently, the Privacy Act doesn’t go far enough, and is too rarely invoked.
In its 2014 report, the US Federal Trade Commission called for laws that enabled consumers to learn about the existence and activities of data brokers. That should be a starting point for Australia too: consumers ought to have reasonable access to information held by these entities.
Facebook has also faced scrutiny over its failure to prevent the spread of “fake news” on its platforms, including via an apparent orchestrated Russian propaganda effort to influence the 2016 US presidential election.
Facebook’s actions – or inactions – facilitated breaches of privacy and human rights associated with democratic governance. But it might be that its business model – and those of its social media peers generally – is simply incompatible with human rights.
In some ways, social media has been a boon for human rights – most obviously for freedom of speech.
Previously, the so-called “marketplace of ideas” was technically available to all (in “free” countries), but was in reality dominated by the elites. While all could equally exercise the right to free speech, we lacked equal voice. Gatekeepers, especially in the form of the mainstream media, largely controlled the conversation.
But today, anybody with internet access can broadcast information and opinions to the whole world. While not all will be listened to, social media is expanding the boundaries of what is said and received in public. The marketplace of ideas is effectively bigger, broader and more diverse.
Social media played a major role in co-ordinating the massive protests that brought down dictatorships in Tunisia and Egypt, as well as large revolts in Spain, Greece, Israel, South Korea, and the Occupy movement. More recently, it has facilitated the rapid growth of the #MeToo and #neveragain movements, among others.
Video sharing site YouTube seems to automatically guide viewers to the fringiest versions of what they might be searching for. A search on vegetarianism might lead to veganism; jogging to ultra-marathons; Donald Trump’s popularity to white supremacist rants; and Hillary Clinton to 9/11 trutherism.
YouTube, via its algorithm’s natural and probably unintended impacts, “may be one of the most powerful radicalising instruments of the 21st century”, with all the attendant human rights abuses that might follow.
The business model and human rights
Human rights abuses might be embedded in the business model that has evolved for social media companies in their second decade.
Essentially, those models are based on collecting users’ data and using it for marketing purposes. That data is extraordinary in its profiling capacity, and in the unprecedented knowledge base and potential power it grants to these private actors.
Indirect political influence is commonly exercised, even in the most credible democracies, by private bodies such as major corporations. This power can be partially constrained by “anti-trust laws” that promote competition and prevent undue market dominance.
Anti-trust measures could, for example, be used to hive off Instagram from Facebook, or YouTube from Google. But these companies’ power essentially arises from the sheer number of their users: in late 2017, Facebook was reported as having more than 2.2 billion active users. Anti-trust measures can limit a company’s acquisitions, but they do not seek to cap its number of customers.
Power through knowledge
In 2010, Facebook conducted an experiment by randomly deploying a non-partisan “I voted” button into 61 million feeds during the US mid-term elections. That simple action led to 340,000 more votes, or about 0.14% of the US voting population. This number can swing an election. A bigger sample would lead to even more votes.
So Facebook knows how to deploy the button to sway an election, which would clearly be lamentable. However, the mere possession of that knowledge makes Facebook a political player. It now knows that button’s political impact, the types of people it is likely to motivate, which party is favoured by its deployment or non-deployment, and at what times of day.
It might seem inherently incompatible with democracy for that knowledge to be vested in a private body. Yet the retention of such data is the essence of Facebook’s ability to make money and run a viable business.
A study has shown that, from an analysis of 70 Facebook “likes”, a computer can judge a person’s personality better than their friends or flatmates can; from 150 likes, better than their family; and from 300 likes, better than their spouse.
This enables the micro-targeting of people for marketing messages – whether those messages market a product, a political party or a cause. This is Facebook’s product, from which it generates billions of dollars. It enables extremely effective advertising and the manipulation of its users. This is so even without Cambridge Analytica’s underhanded methods.
Advertising is manipulative: that is its point. Yet it is a long bow to label all advertising as a breach of human rights.
Advertising is available to all with the means to pay. Social media micro-targeting has become another battleground where money is used to attract customers and, in the political arena, influence and mobilise voters.
While the influence of money in politics is pervasive – and probably inherently undemocratic – it seems unlikely that spending money to deploy social media to boost an electoral message is any more a breach of human rights than other overt political uses of money.
Yet the extraordinary scale and precision of its manipulative reach might justify differential treatment of social media compared to other advertising, as its manipulative political effects arguably undermine democratic choices.
As with mass data collection, it may eventually be concluded that such reach is simply incompatible with democratic and human rights.
Finally, there is the issue of the spread of misinformation.
While paid advertising may not breach human rights, “fake news” distorts and poisons democratic debate. It is one thing for millions of voters to be influenced by precisely targeted social media messages, but another for maliciously false messages to influence and manipulate millions – whether paid for or not.
In a Declaration on Fake News, several UN and regional human rights experts said fake news interfered with the right to know and receive information – part of the general right to freedom of expression.
Its mass dissemination may also distort rights to participate in public affairs. Russia and Cambridge Analytica (assuming allegations in both cases to be true) have demonstrated how social media can be “weaponised” in unanticipated ways.
Yet it is difficult to know how social media companies should deal with fake news. The suppression of fake news is the suppression of speech – a human right in itself.
The preferred solution outlined in the Declaration on Fake News is to develop technology and digital literacy to enable readers to more easily identify fake news. The human rights community seems to be trusting that the proliferation of fake news in the marketplace of ideas can be corrected with better ideas rather than censorship.
However, one cannot be complacent in assuming that “better speech” triumphs over fake news. A recent study concluded fake news on social media:
… diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information.
Also, internet “bots” apparently spread true and false news at the same rate, which indicates that:
… false news spreads more than the truth because humans, not robots, are more likely to spread it.
The depressing truth may be that human nature is attracted to fake stories over the more mundane true ones, often because they satisfy predetermined biases, prejudices and desires. And social media now facilitates their wildfire spread to an unprecedented degree.
Perhaps social media’s purpose – the posting and sharing of speech – cannot help but generate a distorted and tainted marketplace of fake ideas that undermine political debate and choices, and perhaps human rights.
It is premature to assert that the very collection of massive amounts of data is irreconcilable with the right to privacy (and even with rights relating to democratic governance).
Similarly, it is premature to decide that micro-targeting manipulates the political sphere beyond the bounds of democratic human rights.
Finally, it may be that better speech and corrective technology will help to undo fake news’ negative impacts: it is premature to assume that such solutions won’t work.
However, by the time such conclusions can be reached, it may be too late to do much about them. This may be a case where government regulation and international human rights law – and even business acumen and expertise – lag too far behind technological developments to appreciate their human rights dangers.
At the very least, we must now seriously question the business models that have emerged from the dominant social media platforms. Maybe the internet should be rewired from the grassroots, rather than be led by digital oligarchs’ business needs.
The Labor Party’s recent decision to ban its candidates from using their own social media accounts as publicity platforms at the next federal election may be a sign that society’s infatuation with social media as a source of news and information is cooling.
Good evidence for this emerged recently with the publication of the 2018 findings from the Edelman Trust Barometer. The annual study has surveyed more than 33,000 people across the globe about how much trust they have in institutions, including government, media, businesses and NGOs.
This year, there was a sharp increase in trust in journalism as a source of news and information, and a decline in trust in social media and search engines for this purpose. Globally, trust in journalism rose five points to 59%, while trust in social media and search engines fell two points to 51% – a gap of eight points.
In Australia, the level of trust in both was below the global average, but the gap between them was greater, at 17 points – 52% for journalism and 35% for social media and search engines.
Labor’s decision may also reflect a healthy distrust of its candidates’ judgement about how to use social media for political purposes.
Liberal Senator Jim Molan’s recent sharing of an anti-Islamic post by the British right-wing extremist group Britain First on his Facebook account showed how poor some individual judgements can be.
If ever there was a two-edged sword in politics, social media is it. It gives politicians a weapon with which to cut their way past traditional journalistic gatekeepers and reach the public directly, but it also exposes them to public scrutiny with a relentless intensity that previous generations of politicians never had to endure.
This intensity comes from two sources: the 24/7 news cycle with the associated nonstop interaction between traditional journalism and social media, and the opportunity that digital technology gives everyone to jump instantaneously into public debate.
So Molan’s stupidity, for example, now attracts criticism from the other side of the world. Brendan Cox, the widower of a British politician, Jo Cox, who was murdered by a man yelling “Britain first”, has weighed in.
The interaction between traditional journalism and social media also means journalists can latch onto stories much more quickly because there are countless pairs of eyes and ears out there tipping them off.
The result of this scrutiny is that public figures can never be sure they are off-camera, as it were. This means there has been a significant reduction in their power to control the flow of information about themselves. They are liable to be “on the record” anywhere there is a mic or a smartphone – and may not even know it.
As one observer has said of former prime minister Bob Hawke:

He did some appalling things when drunk … He was lucky that he went through an era where he couldn’t be pinged. We didn’t have the internet. We didn’t have mobile phones. Let’s face it, a Bob Hawke today behaving in the same manner would never become prime minister. He’d have been buried long before he got near the parliament.
Would we now think differently of a politician like Bob Hawke if some of his well-documented excesses had been captured and circulated on social media in this way?
Perhaps not. Hawke was of his time, an embodiment of the national mood and of what Australians imagine to be the national larrikin character. He might have thrived.
With Hawke, what you saw was what you got. So he had a built-in immunity to social media’s particular strength: its capacity to show people up as ridiculous, dishonest or hypocritical.
And his old political opponent Malcolm Fraser was, in his later years, adept at using Twitter to criticise the government of Tony Abbott, one of his Liberal successors as prime minister.
Yet by exerting the iron discipline for which he was famous, saying exactly what he wanted to say and not a word more, Fraser avoided the pitfalls that the likes of Senator Molan stumble into.
Indeed, US President Donald Trump’s reputation for Twitter gaffes hasn’t hurt his popularity among his base, and is even lauded by some as a mark of authenticity.
So it is likely that the politicians of the past would not have fared very differently from those of the present. The competent would have adapted and used social media to their advantage; the incompetent would have been shown up for what they are.
Social media has the potential to strengthen democratic life. It makes all public figures – including journalists – more accountable. But as we have seen, especially in the 2016 US presidential elections, it can also be used to weaken democratic life by amplifying the spread of false information.
As a result, democracies everywhere are wrestling with the overarching problem of how to make the giant social media platforms, especially Facebook, accountable for how they use their publishing power.
Out of all this, one trend seems clear: where news and information is concerned, society is no longer dazzled by the novelty of social media and is wakening to its weaknesses.