With the federal election now officially underway, commentators have begun to consider not only the techniques parties and candidates will use to persuade voters, but also any potential threats we are facing to the integrity of the election.
Invariably, this discussion leads straight to digital.
In the aftermath of the 2016 United States presidential election, the coverage of digital campaigning has been unparalleled. But this coverage has done very little to improve understanding of the key issues confronting our democracies as a result of the continued rise of digital modes of campaigning.
Some degree of confusion is understandable since digital campaigning is opaque – especially in Australia. We have very little information on what political parties or third-party campaigners are spending their money on, some of which comes from taxpayers. But the hysteria around digital is, for the most part, unfounded.
In any attempt to better understand digital, it’s useful to consider why political parties and other campaigners are using it as part of their election strategies. The reasons are relatively straightforward.
The media landscape is fragmented. Voters are active on social media platforms, such as Facebook and Instagram, so that’s where the parties need to be.
Compared to the cost of advertising on television, radio or in print, digital advertising is very affordable.
Platforms like Facebook offer services that give campaigners a relatively straightforward way to segment voters. Campaigners can use these tools to micro-target them with tailored messaging.
Voting, persuasion and mobilisation
While there is certainly more research required into digital campaigning, there is no scholarly study I know of that suggests advertising online – including micro-targeted messaging – has the effect that it is often claimed to have.
The exaggeration and lack of clarity around digital are problematic because there is almost no evidence to support many of the claims made. This type of technology fetishism also implies that voters are easily manipulated, when there is little evidence of this.
While it might help some commentators to rationalise unexpected election results, a more fruitful endeavour than blaming technology would be to try to understand why voters are attracted to various parties or candidates, such as Trump in the US.
Digital campaigning is not a magic bullet, so commentators need to stop treating it as if it is. Parties hope it helps them in their persuasion efforts, but this is through layering their messages across as many mediums as possible, and using the network effect that social media provides.
Data privacy and foreign interference
The two clear dangers related to digital concern data privacy and foreign meddling. We should not accept that our data are shared widely as a result of some box we ticked online. And we should have greater control over how our data are used, and who they are sold to.
An obvious starting point in Australia is questioning whether parties should continue to be exempt from privacy legislation. Research suggests that a majority of voters see a distinction between commercial entities advertising to us online and parties or other campaigners doing the same.
We also need to take some personal responsibility, since many of us do not always take our digital footprint as seriously as we should. It matters, and we need to educate ourselves on this.
The more vexing issue is that of foreign interference. One of the first things we need to recognise is that it is unlikely this type of meddling online would independently turn an election.
This does not mean we should accept this behaviour, but changing election results is just one of the goals these actors have. Increasing polarisation and contributing to long-term social divisions is part of the broader strategy.
But beyond these immediate responses, this terrible incident presents an opportunity for longer term reform. It’s time for social media platforms to be more open about how livestreaming works, how it is moderated, and what should happen if or when the rules break down.
With the alleged perpetrator apparently having flown under the radar prior to the incident in Christchurch, our collective focus has now turned to the online radicalisation of young men.
As part of that, online platforms face increased scrutiny, and Facebook and YouTube have drawn criticism.
After the original livestream was disseminated on Facebook, YouTube became a venue for the re-upload and propagation of the recorded footage.
Both platforms have made public statements about their efforts at moderation.
In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload […]
Focusing chiefly on livestreaming is somewhat reductive. Although the shooter initially streamed his own footage, the greater challenge of controlling the video largely relates to two issues:
the length of time it was available on Facebook’s platform before it was removed
the moderation of “mirror” video publication by people who had chosen to download, edit, and re-upload the video for their own purposes.
These issues illustrate the weaknesses of existing content moderation policies and practices.
Not an easy task
Content moderation is a complex and unenviable responsibility. Platforms like Facebook and YouTube are expected to balance the virtues of free expression and newsworthiness with socio-cultural norms and personal desires, as well as the local regulatory regimes of the countries they operate in.
People might reasonably expect platforms like Facebook and YouTube to have thorough controls over what is uploaded on their sites. However, the companies’ huge user bases mean they often must balance the application of automated, algorithmic systems for content moderation (like Microsoft’s PhotoDNA, and YouTube’s ContentID) with teams of human moderators.
And while some algorithmic systems can be effective at scale, they can also be subverted by competent users who understand aspects of their methodology. If you’ve ever found a video on YouTube where the colours are distorted, the audio playback is slightly out of sync, or the image is heavily zoomed and cropped, you’ve likely seen someone’s attempt to get around ContentID algorithms.
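The evasion tactics described above exploit how fingerprint-matching works. As a toy illustration only – not ContentID's actual method, which is proprietary – here is a simple "average hash" in Python. A mild global distortion barely changes the fingerprint, but a heavy zoom-and-crop rearranges pixels and flips many bits, defeating a naive match. All names and values here are hypothetical.

```python
# Toy perceptual "average hash": each pixel becomes one bit depending on
# whether it is brighter than the image's mean brightness.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original = [
    [200, 200, 50, 50],
    [200, 200, 50, 50],
    [50, 50, 200, 200],
    [50, 50, 200, 200],
]

# A re-upload with uniformly brightened colours hashes identically,
# because the relative brightness pattern is unchanged...
brightened = [[min(255, p + 20) for p in row] for row in original]

# ...but a heavy zoom-and-crop (take the top-left quadrant and stretch
# it back to 4x4) moves pixels around and flips many bits.
cropped = [[original[r // 2][c // 2] for c in range(4)] for r in range(4)]

print(hamming(average_hash(original), average_hash(brightened)))  # 0: match
print(hamming(average_hash(original), average_hash(cropped)))     # 8: no match
```

Real fingerprinting systems are far more robust than this sketch, but the arms race is the same in principle: each transformation a matcher tolerates is one an uploader no longer needs, and each one it doesn't tolerate is a potential workaround.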
For online platforms, the response to terror attacks is further complicated by the difficult balance they must strike between their desire to protect users from gratuitous or appalling footage and their commitment to inform people seeking news through their platform.
Facebook’s and YouTube’s challenges in addressing livestreamed hate crimes tell us something important. We need a more open, transparent approach to moderation. Platforms must talk openly about how this work is done, and be prepared to incorporate feedback from our governments and society more broadly.
A good place to start is the Santa Clara principles, generated initially from a content moderation conference held in February 2018 and updated in May 2018. These offer a solid foundation for reform, stating:
companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines
companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension
companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.
A more socially responsible approach to platforms’ roles as moderators of public discourse necessitates a move away from the black-box secrecy platforms are accustomed to, and a move towards more thorough public discussions about content moderation.
In the end, greater transparency may facilitate a less reactive policy landscape, where both public policy and opinion have a greater understanding around the complexities of managing new and innovative communications technologies.
The shocking mass shooting in Christchurch on Friday is notable for the perpetrator’s use of livestreaming video technology to broadcast horrific first-person footage of the attack on social media.
In the highly disturbing video, the gunman drives to the Masjid Al Noor mosque, walks inside and shoots multiple people before leaving the scene in his car.
The use of social media technology and livestreaming marks the attack as different from many other terrorist incidents. It is a form of violent “performance crime”. That is, the video streaming is a central component of the violence itself; it is not incidental to the crime, nor a disgusting trophy for the perpetrator to re-watch later.
In the past, terrorism functioned according to what has been called the “theatre of terror”, which required the media to report on the spectacle of violence created by the group. Nowadays, it’s much easier for someone to both create the spectacle of horrific violence and distribute it widely by themselves.
In an era of social media, which is driven in large part by spectacle, we all have a role to play in ensuring that terrorists aren’t rewarded for their crimes with our clicks.
There is a tragic and recent history of performance crime videos that use livestreaming and social media video services as part of their tactics.
In 2017, for example, the sickening murder video of an elderly man in Ohio was uploaded to Facebook, and the torture of a man with disabilities in Chicago was livestreamed. In 2015, the murder of two journalists was simultaneously broadcast on-air, and livestreamed.
American journalist Gideon Lichfield wrote of the 2015 incident that the killer:
didn’t just want to commit murder – he wanted the reward of attention, for having done it.
Performance crimes can be distinguished from the way traditional terror attacks and propaganda work, such as the hyper-violent videos spread by ISIS in 2014.
Typical propaganda media that feature violence use a dramatic spectacle to raise attention and communicate the group’s message. But the perpetrators of performance crimes often don’t have a clear ideological message to convey.
While many questions remain about the attack itself, we need to consider how best to respond to performance crime videos. Since 2012, many academics and journalists have argued that media coverage of mass violence should be limited to prevent the reward of attention from potentially driving further attacks.
That debate has continued following the tragic events in New Zealand. Journalism lecturer Glynn Greensmith argued that our responsibility may well be to limit the distribution of the Christchurch shooting video and manifesto as much as possible.
It seems that, in this case, social media and news platforms have been more mindful about removing the footage, and refusing to rebroadcast it. The video was taken down within 20 minutes by Facebook, which said that in the first 24 hours it removed 1.5 million videos of the attack globally.
The video is likely to be declared objectionable material, according to New Zealand’s Department of Internal Affairs, which means it is illegal to possess. Many are calling on the public not to share it online.
Simply watching the video can cause trauma
Yet the video still exists, dispersed throughout the internet. It may be removed from official sites, but its online presence is maintained via re-uploads and file-sharing sites. Screenshots of the video, which frequently appear in news reports, also inherit symbolic and traumatic significance when they serve as visual reminders of the distressing event.
While the savage violence of the event is distressing in its own right, this additional potential to traumatise people who simply watch the video is something that also plays into the aims of those committing performance crimes in the name of terror.
Rewarding the spectacle
Platforms like Facebook, Instagram and YouTube are powered by a framework that encourages, rewards and creates performance. People who post cat videos cater to this appetite for entertainment, but so do criminals.
I have previously argued that this media environment seems to produce violent acts that otherwise may not have occurred. Of course, I don’t mean that the perpetrators are not responsible or accountable for their actions. Rather, performance crime represents a different type of activity specific to the technology and social phenomenon of social media – the accidental dark side of livestreaming services.
Would the alleged perpetrator of this terrorist act in Christchurch still have committed it without the capacity to livestream? We don’t know.
But as Majid Yar suggests, rather than concerning ourselves with old arguments about whether media violence can cause criminal behaviour, we should focus on how the techniques and reward systems we use to represent ourselves to online audiences are in fact a central component of these attacks.
We may hope that social media companies will get better at filtering out violent content, but until they do we should reflect on our own behaviour online. As we like and share content of all kinds on social platforms, let’s consider how our activities could contribute to an overall spectacle society that inspires future perpetrator-produced videos of performance crime – and act accordingly.
To explain shadow profiles, let’s imagine a simple social group of three people – Ashley, Blair and Carmen – who already know one another, and have each other’s email addresses and phone numbers in their phones.
If Ashley joins Facebook and uploads her phone contacts to Facebook’s servers, then Facebook can proactively suggest friends whom she might know, based on the information she uploaded.
For now, let’s imagine that Ashley is the first of her friends to join Facebook. The information she uploaded is used to create shadow profiles for both Blair and Carmen — so that if Blair or Carmen joins, they will be recommended Ashley as a friend.
Next, Blair joins Facebook, uploading his phone’s contacts too. Thanks to the shadow profile, he has a ready-made connection to Ashley in Facebook’s “People You May Know” feature.
At the same time, Facebook has learned more about Carmen’s social circle — in spite of the fact that Carmen has never used Facebook, and therefore has never agreed to its policies for data collection.
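The walkthrough above can be sketched in code. This is a hypothetical reconstruction for illustration only – the data structures and function names are mine, and Facebook has not published how its systems actually work:

```python
# Hypothetical sketch of shadow-profile accumulation from contact uploads.
# contact identifier -> {"member": has joined?, "known_by": users who uploaded them}
profiles = {}

def upload_contacts(user, contacts):
    """Called when `user` joins and shares their phone's contact list."""
    for contact in contacts:
        entry = profiles.setdefault(contact, {"member": False, "known_by": set()})
        entry["known_by"].add(user)  # profile grows whether or not `contact` consented
    profiles.setdefault(user, {"member": False, "known_by": set()})["member"] = True

def people_you_may_know(user):
    """Suggest people who uploaded this user's contact details."""
    return sorted(profiles.get(user, {"known_by": set()})["known_by"])

# Ashley joins first, uploading Blair's and Carmen's details.
upload_contacts("ashley", ["blair", "carmen"])
# Blair joins next: the shadow profile already links him to Ashley.
upload_contacts("blair", ["ashley", "carmen"])

print(people_you_may_know("blair"))   # ['ashley']
print(people_you_may_know("carmen"))  # ['ashley', 'blair'], yet Carmen never joined
```

The point of the sketch is the last line: Carmen’s entry exists, and grows, purely from other people’s uploads.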
Despite the scary-sounding name, I don’t think there is necessarily any malice or ill will in Facebook’s creation and use of shadow profiles.
It seems like an earnestly designed feature in service of Facebook’s goal of connecting people. It’s a goal that clearly also aligns with Facebook’s financial incentives for growth and garnering advertising attention.
But the practice brings to light some thorny issues around consent, data collection, and personally identifiable information.
Some of the questions Zuckerberg faced this week highlighted issues relating to the data that Facebook collects from users, and the consent and permissions that users give (or are unaware they give).
Facebook is often quite deliberate in its characterisations of “your data”, rejecting the notion that it “owns” user data.
That said, there are a lot of data on Facebook, and what exactly is “yours” or just simply “data related to you” isn’t always clear. “Your data” notionally includes your posts, photos, videos, comments, content, and so on. It’s anything that could be considered as copyright-able work or intellectual property (IP).
What’s less clear is the state of your rights relating to data that is “about you”, rather than supplied by you. This is data that is created by your presence or your social proximity to Facebook.
Examples of data “about you” might include your browsing history and data gleaned from cookies, tracking pixels, and the like button widget, as well as social graph data supplied whenever Facebook users supply the platform with access to their phone or email contact lists.
Like most internet platforms, Facebook rejects any claim to ownership of the IP that users post. To avoid falling foul of copyright issues in the provision of its services, Facebook demands (as part of its user agreements and Statement of Rights and Responsibilities) a:
…non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.
This confusion has fuelled recurring hoaxes, such as the chain status in which users declare:

In response to the new Facebook guidelines I hereby declare that my copyright is attached to all of my personal details…
Part of the reason we keep seeing data scares like this is that Facebook’s lacklustre messaging around user rights and data policies have contributed to confusion, uncertainty and doubt among its users.
It was a point that Republican Senator John Kennedy raised with Zuckerberg this week (see video).
After the grilling
Zuckerberg and Facebook should learn from this congressional grilling that they have struggled and occasionally failed in their responsibilities to users.
It’s important that Facebook now makes efforts to communicate more strongly with users about their rights and responsibilities on the platform, as well as the responsibilities that Facebook owes them.
This should go beyond a mere awareness-style PR campaign. It should seek to truly inform and educate Facebook’s users, and people who are not on Facebook, about their data, their rights, and how they can meaningfully safeguard their personal data and privacy.
Given the magnitude of Facebook as an internet platform, and its importance to users across the world, the spectre of regulation will continue to raise its head.
Ideally, the company should look to broaden its governance horizons by seeking to truly engage in consultation and reform with its stakeholders – its users – as well as the civil society groups and regulatory bodies that seek to empower users in these spaces.
Facebook announced last week it would discontinue the partner programs that allow advertisers to use third-party data from companies such as Acxiom, Experian and Quantium to target users.
Graham Mudd, Facebook’s product marketing director, said in a statement:
We want to let advertisers know that we will be shutting down Partner Categories. This product enables third party data providers to offer their targeting directly on Facebook. While this is common industry practice, we believe this step, winding down over the next six months, will help improve people’s privacy on Facebook.
Few people seemed to notice, and that’s hardly surprising. These data brokers operate largely in the background.
The invisible industry worth billions
In 2014, one researcher described the entire industry as “largely invisible”. That’s no mean feat, given how much money is being made. Personal data has been dubbed the “new oil”, and data brokers are very efficient miners. In the 2018 fiscal year, Acxiom expects annual revenue of approximately US$945 million.
The data broker business model involves accumulating information about internet users (and non-users) and then selling it. As such, data brokers have highly detailed profiles on billions of individuals, comprising age, race, sex, weight, height, marital status, education level, politics, shopping habits, health issues, holiday plans, and more.
These profiles come not just from data you’ve shared, but from data shared by others, and from data that’s been inferred. In its 2014 report into the industry, the US Federal Trade Commission (FTC) showed how a single data broker had 3,000 “data segments” for nearly every US consumer.
Based on the interests inferred from this data, consumers are then placed in categories such as “dog owner” or “winter activity enthusiast”. However, some categories are potentially sensitive, including “expectant parent”, “diabetes interest” and “cholesterol focus”, or involve ethnicity, income and age. The FTC’s Jon Leibowitz described data brokers as the “unseen cyberazzi who collect information on all of us”.
In Australia, Facebook launched the Partner Categories program in 2015. Its aim was to “reach people based on what they do and buy offline”. This includes demographic and behavioural data, such as purchase history and home ownership status, which might come from public records, loyalty card programs or surveys. In other words, Partner Categories enables advertisers to use data brokers to reach specific audiences. This is particularly useful for companies that don’t have their own customer databases.
The recent Cambridge Analytica furore stemmed from third-party access to user data. Indeed, apps created by third parties have proved particularly problematic for Facebook. From 2007 to 2014, Facebook encouraged external developers to create apps for users to add content, play games, share photos, and so on.
Facebook then gave the app developers wide-ranging access to user data, and to users’ friends’ data. The data shared might include details of schooling, favourite books and movies, or political and religious affiliations.
With the Partner Categories program, the buying, selling and aggregation of user data may be largely hidden, but is it unethical? The fact that Facebook has moved to stop the arrangement suggests that it might be.
More transparency and more respect for users
To date, there has been insufficient transparency, insufficient fairness and insufficient respect for user consent. This applies to Facebook, but also to app developers, and to Acxiom, Experian, Quantium and other data brokers.
Users might have clicked “agree” to terms and conditions that contained a clause ostensibly authorising such sharing of data. However, it’s hard to construe this type of consent as a genuine moral justification for the practice.
In Australia, new laws are needed. Data flows in complex and unpredictable ways online, and legislation ought to provide, under threat of significant penalties, that companies (and others) must abide by reasonable principles of fairness and transparency when they deal with personal information. Further, such legislation can help specify what sort of consent is required, and in which contexts. Currently, the Privacy Act doesn’t go far enough, and is too rarely invoked.
In its 2014 report, the US Federal Trade Commission called for laws that enabled consumers to learn about the existence and activities of data brokers. That should be a starting point for Australia too: consumers ought to have reasonable access to information held by these entities.
Facebook has also faced scrutiny over its failure to prevent the spread of “fake news” on its platforms, including via an apparent orchestrated Russian propaganda effort to influence the 2016 US presidential election.
Facebook’s actions – or inactions – facilitated breaches of privacy and human rights associated with democratic governance. But it might be that its business model – and those of its social media peers generally – is simply incompatible with human rights.
In some ways, social media has been a boon for human rights – most obviously for freedom of speech.
Previously, the so-called “marketplace of ideas” was technically available to all (in “free” countries), but was in reality dominated by the elites. While all could equally exercise the right to free speech, we lacked equal voice. Gatekeepers, especially in the form of the mainstream media, largely controlled the conversation.
But today, anybody with internet access can broadcast information and opinions to the whole world. While not all will be listened to, social media is expanding the boundaries of what is said and received in public. The marketplace of ideas is, in effect, bigger, broader and more diverse.
Social media played a major role in co-ordinating the massive protests that brought down dictatorships in Tunisia and Egypt, as well as large revolts in Spain, Greece, Israel, South Korea, and the Occupy movement. More recently, it has facilitated the rapid growth of the #MeToo and #neveragain movements, among others.
Video sharing site YouTube seems to automatically guide viewers to the fringiest versions of what they might be searching for. A search on vegetarianism might lead to veganism; jogging to ultra-marathons; Donald Trump’s popularity to white supremacist rants; and Hillary Clinton to 9/11 trutherism.
YouTube, via its algorithm’s natural and probably unintended impacts, “may be one of the most powerful radicalising instruments of the 21st century”, with all the attendant human rights abuses that might follow.
The business model and human rights
Human rights abuses might be embedded in the business model that has evolved for social media companies in their second decade.
Essentially, those models are based on collecting users’ data and using it for marketing purposes. And the data they hold are extraordinary in their profiling capacities, and in the consequent unprecedented knowledge base and potential power they grant to these private actors.
Indirect political influence is commonly exercised, even in the most credible democracies, by private bodies such as major corporations. This power can be partially constrained by “anti-trust laws” that promote competition and prevent undue market dominance.
Anti-trust measures could, for example, be used to hive off Instagram from Facebook, or YouTube from Google. But these companies’ power essentially arises from the sheer number of their users: in late 2017, Facebook was reported as having more than 2.2 billion active users. Anti-trust measures constrain a company’s acquisitions; they do not seek to cap the number of its customers.
Power through knowledge
In 2010, Facebook conducted an experiment by randomly deploying a non-partisan “I voted” button into 61 million feeds during the US mid-term elections. That simple action led to 340,000 more votes, or about 0.14% of the US voting population. This number can swing an election. A bigger sample would lead to even more votes.
So Facebook knows how to deploy the button to sway an election, which would clearly be lamentable. However, the mere possession of that knowledge makes Facebook a political player. It now knows the button’s political impact, the types of people it is likely to motivate, which party is favoured by its deployment or non-deployment, and at what times of day.
It might seem inherently incompatible with democracy for that knowledge to be vested in a private body. Yet the retention of such data is the essence of Facebook’s ability to make money and run a viable business.
A study has shown that, from an analysis of just 70 “likes”, a computer knows more about a person’s personality than their friends or flatmates do; from 150 likes, it knows more than their family; and from 300 likes it can outperform their spouse.
This enables the micro-targeting of people for marketing messages – whether those messages market a product, a political party or a cause. This is Facebook’s product, from which it generates billions of dollars. It enables extremely effective advertising and the manipulation of its users. This is so even without Cambridge Analytica’s underhanded methods.
Advertising is manipulative: that is its point. Yet it is a long bow to label all advertising as a breach of human rights.
Advertising is available to all with the means to pay. Social media micro-targeting has become another battleground where money is used to attract customers and, in the political arena, influence and mobilise voters.
While the influence of money in politics is pervasive – and probably inherently undemocratic – it seems unlikely that spending money to deploy social media to boost an electoral message is any more a breach of human rights than other overt political uses of money.
Yet the extraordinary scale and precision of its manipulative reach might justify differential treatment of social media compared to other advertising, as its manipulative political effects arguably undermine democratic choices.
As with mass data collection, it may eventually be concluded that such reach is simply incompatible with democratic and human rights.
Finally, there is the issue of the spread of misinformation.
While paid advertising may not breach human rights, “fake news” distorts and poisons democratic debate. It is one thing for millions of voters to be influenced by precisely targeted social media messages, but another for maliciously false messages to influence and manipulate millions – whether paid for or not.
In a Declaration on Fake News, several UN and regional human rights experts said fake news interfered with the right to know and receive information – part of the general right to freedom of expression.
Its mass dissemination may also distort rights to participate in public affairs. Russia and Cambridge Analytica (assuming allegations in both cases to be true) have demonstrated how social media can be “weaponised” in unanticipated ways.
Yet it is difficult to know how social media companies should deal with fake news. The suppression of fake news is the suppression of speech – a human right in itself.
The preferred solution outlined in the Declaration on Fake News is to develop technology and digital literacy to enable readers to more easily identify fake news. The human rights community seems to be trusting that the proliferation of fake news in the marketplace of ideas can be corrected with better ideas rather than censorship.
However, one cannot be complacent in assuming that “better speech” triumphs over fake news. A recent study concluded fake news on social media:
… diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information.
Also, internet “bots” apparently spread true and false news at the same rate, which indicates that:
… false news spreads more than the truth because humans, not robots, are more likely to spread it.
The depressing truth may be that human nature is attracted to fake stories over the more mundane true ones, often because they satisfy predetermined biases, prejudices and desires. And social media now facilitates their wildfire spread to an unprecedented degree.
Perhaps social media’s purpose – the posting and sharing of speech – cannot help but generate a distorted and tainted marketplace of fake ideas that undermine political debate and choices, and perhaps human rights.
It is premature to assert the very collection of massive amounts of data is irreconcilable with the right to privacy (and even rights relating to democratic governance).
Similarly, it is premature to decide that micro-targeting manipulates the political sphere beyond the bounds of democratic human rights.
Finally, it may be that better speech and corrective technology will help to undo fake news’ negative impacts: it is premature to assume that such solutions won’t work.
However, by the time such conclusions may be reached, it may be too late to do much about it. This may be a case where government regulation and international human rights law – and even business acumen and expertise – lag too far behind technological developments to appreciate their human rights dangers.
At the very least, we must now seriously question the business models that have emerged from the dominant social media platforms. Maybe the internet should be rewired from the grassroots, rather than be led by digital oligarchs’ business needs.
In the aftermath of revelations about the alleged misuse of Facebook user data by Cambridge Analytica, many social media users are educating themselves about their own digital footprint. And some are shocked at the extent of it.
Last week, one user took advantage of a Facebook feature that enables you to download all the information the company stores about you. He found his call and SMS history in the data dump – something Facebook says is an opt-in feature for those using Messenger and Facebook Lite on Android.
This highlights an issue that we don’t talk about enough when it comes to data privacy: that the security of our data is dependent not only on our own vigilance, but also that of those we interact with.
It’s easy for friends to share our data
In the past, personal data was either captured in our memories or in physical objects, such as diaries or photo albums. If a friend wanted data about us, they would have to either observe us or ask us for it. That requires effort, or our consent, and focuses on information that is both specific and meaningful.
Nowadays, data others hold about us is given away easily. That’s partly because the data apps ask for is largely intangible and invisible, as well as vague rather than specific.
What’s more, it doesn’t seem to take much to get us to give away other people’s data in return for very little: one study found that 98% of MIT students would give away their friends’ email addresses when promised free pizza.
Other studies have shown that collaborating in folders on cloud services, such as Google Drive, can result in privacy losses that are 39% higher due to collaborators installing third-party apps you wouldn’t choose to install yourself. Facebook’s data download tool poses another risk in that once the data is taken out of Facebook it becomes even easier to copy and distribute.
This shift from personal to interdependent online privacy reliant on our friends, family and colleagues is a seismic one for the privacy agenda.
How much data are we talking about?
With more than 3.5 million apps on Google Play alone, the collection of data about our friends via back-door methods is more common than we might think. The back door opens when you press “accept” on a permission request granting an app access to your contacts during installation.
Then the data harvesting machinery begins its work – often in perpetuity, and without us knowing or understanding what will be done with it. More importantly, our friends never agreed to us giving away their data. And we have a lot of friends’ data to harvest.
The average Australian has 234 Facebook friends. Large-scale data collection is easy in an interconnected world when each person who signs up for an app has 234 friends, and each of them has 234 and, so on. That’s how Cambridge Analytica was apparently able to collect information on up to 50 million users, with permission from just 270,000.
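The scale involved can be sketched with a back-of-envelope calculation. The figures below simply reuse the numbers quoted above (and apply the Australian average friend count purely for illustration); overlapping friend lists are ignored, so the result is an upper bound rather than a count of unique people.

```python
# Illustrative back-of-envelope estimate using the figures quoted above.
# Overlapping friend lists are ignored, so this is an upper bound on the
# number of people whose data could be exposed, not a count of unique users.
installers = 270_000   # users who granted the app permission (Cambridge Analytica case)
avg_friends = 234      # average number of Facebook friends (Australian figure)

raw_reach = installers * avg_friends
print(f"Friend records potentially exposed: {raw_reach:,}")  # 63,180,000
```

After accounting for overlap between friend lists, a raw figure of this size is broadly consistent with the reported reach of up to 50 million users.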
Add to that the fact that the average person uses nine different apps on a daily basis. Once installed, some of these apps can harvest data daily without your friends knowing, and 70% of apps share it with third parties.
We’re more likely to refuse data requests that are specific
Our research suggests this willingness to share can be reduced by making a data request more specific – for example, by separating out “contacts” from “photos”. When we asked participants whether they had the right to give away all the data on their phone, 95% said yes. But when they focused on just their contacts, this decreased to 80%.
We can take this further with a thought experiment. Imagine if an app asked you for your “contacts, including your grandmother’s phone number and your daughter’s photos”. Would you be more likely to say no? The reality of what you are actually giving away in these consent agreements becomes more apparent with a specific request.
The silver lining is more vigilance
This new reality not only threatens moral codes and friendships; it can also cause harm through hidden viruses, malware, spyware or adware. We may even face legal consequences, as in a recent German case in which a judge ruled that giving away a friend’s data on WhatsApp without their permission was wrong.
Although company policies on privacy can help, these are difficult to police. Facebook’s “platform policy” at the time the Cambridge Analytica data was harvested only allowed the collection of friends’ data to improve the user experience of an app, while preventing it from being sold on or used for advertising. But this puts a huge burden on companies to police, investigate and enforce these policies. It’s a task few can afford, and even a company the size of Facebook failed.
The silver lining to the Cambridge Analytica case is that more and more people are recognising that the idea of “free” digital services is an illusion. The price we pay is not only our own privacy, but the privacy of our friends, family and colleagues.
The Labor Party’s recent decision to ban its candidates from using their own social media accounts as publicity platforms at the next federal election may be a sign that society’s infatuation with social media as a source of news and information is cooling.
Good evidence for this emerged recently with the publication of the 2018 findings from the Edelman Trust Barometer. The annual study has surveyed more than 33,000 people across the globe about how much trust they have in institutions, including government, media, businesses and NGOs.
This year, there was a sharp increase in trust in journalism as a source of news and information, and a decline in trust in social media and search engines for this purpose. Globally, trust in journalism rose five points to 59%, while trust in social media and search engines fell two points to 51% – a gap of eight points.
In Australia, trust in both was below the global average, but the gap between them was wider: 17 points, with journalism at 52% and social media and search engines at 35%.
Labor’s decision may also reflect a healthy distrust of its candidates’ judgement about how to use social media for political purposes.
Liberal Senator Jim Molan’s recent sharing of an anti-Islamic post by the British right-wing extremist group Britain First on his Facebook account showed how poor some individual judgements can be.
If ever there was a two-edged sword in politics, social media is it. It gives politicians a weapon with which to cut their way past traditional journalistic gatekeepers and reach the public directly, but it also exposes them to public scrutiny with a relentless intensity that previous generations of politicians never had to endure.
This intensity comes from two sources: the 24/7 news cycle with the associated nonstop interaction between traditional journalism and social media, and the opportunity that digital technology gives everyone to jump instantaneously into public debate.
So Molan’s stupidity, for example, now attracts criticism from the other side of the world. Brendan Cox, the widower of British politician Jo Cox, who was murdered by a man yelling “Britain first”, has weighed in.
The interaction between traditional journalism and social media also means journalists can latch onto stories much more quickly because there are countless pairs of eyes and ears out there tipping them off.
The result of this scrutiny is that public figures can never be sure they are off-camera, as it were. This means there has been a significant reduction in their power to control the flow of information about themselves. They are liable to be “on the record” anywhere there is a mic or a smartphone – and may not even know it.
Consider this assessment of Bob Hawke’s behaviour in his prime:

He did some appalling things when drunk … He was lucky that he went through an era where he couldn’t be pinged. We didn’t have the internet. We didn’t have mobile phones. Let’s face it, a Bob Hawke today behaving in the same manner would never become prime minister. He’d have been buried long before he got near the parliament.
Would we now think differently of a politician like Bob Hawke if some of his well-documented excesses had been captured and circulated on social media in this way?
Perhaps not. Hawke was of his time, an embodiment of the national mood and of what Australians imagine to be the national larrikin character. He might have thrived.
With Hawke, what you saw was what you got. So he had a built-in immunity to social media’s particular strength: its capacity to show people up as ridiculous, dishonest or hypocritical.
And his political opponent Malcolm Fraser was, in his later years, adept at using Twitter to criticise the government of one of his Liberal successors as Prime Minister, Tony Abbott.
Yet by exerting the iron discipline for which he was famous, saying exactly what he wanted to say and not a word more, Fraser avoided the pitfalls that the likes of Senator Molan stumble into.
Indeed, US President Donald Trump’s reputation for Twitter gaffes hasn’t hurt his popularity among his base, and is even lauded by some as a mark of authenticity.
So it is likely that the politicians of the past would not have fared very differently from those of the present. The competent would have adapted and used social media to their advantage; the incompetent would have been shown up for what they are.
Social media has the potential to strengthen democratic life. It makes all public figures – including journalists – more accountable. But as we have seen, especially in the 2016 US presidential elections, it can also be used to weaken democratic life by amplifying the spread of false information.
As a result, democracies everywhere are wrestling with the overarching problem of how to make the giant social media platforms, especially Facebook, accountable for how they use their publishing power.
Out of all this, one trend seems clear: where news and information is concerned, society is no longer dazzled by the novelty of social media and is wakening to its weaknesses.
As Queensland approaches its election day on Saturday, the social media campaign for votes continues apace. But over the final two weeks, the focus of that campaign has gradually shifted.
Labor Premier Annastacia Palaszczuk’s plan to veto a potential A$1 billion loan to the Adani mine project resulted in a considerable drop in Adani-related tweets directed at Queensland candidates, and that pattern has held through subsequent weeks. Labor has not entirely neutralised the Adani controversy, but the mine project is no longer the major talking point of the Twitter campaign.
By contrast, the most significant emerging theme of these past two weeks has been the role that Pauline Hanson’s One Nation Party might play in the new parliament. We saw some of this in our previous analysis, in response to the LNP’s decision to direct preferences to One Nation over Labor in a majority of Queensland seats. That particular discussion has now shifted to a much broader debate about the very real prospect that One Nation may hold the balance of power after the election.
Our dataset captures the tweets posted by and directed at Queensland election candidates. Of those tweets, some 51% addressed the Adani mine or One Nation, but the emphasis has now swung considerably towards the latter. This was sparked in part by the Liberal National Party’s (LNP) preference announcement, with preferences briefly becoming a distinct major topic in their own right.
Labor has been quick to exploit this arrangement, in widely shared posts from the central party account. However, recent controversial footage of its own MP Jo-Ann Miller hugging Pauline Hanson on the campaign trail might have blunted this message somewhat.
One Nation also featured heavily in another major topic of the second half of the campaign: schools. While Labor’s pledge to establish several new schools received only moderate attention, Queensland One Nation leader Steve Dickson’s bizarre comments about the Safe Schools anti-bullying programme were met with widespread condemnation. A tweet criticising Dickson’s subsequent apology is now the second most retweeted post of the entire campaign:
These topical changes have affected the patterns of engagement with the candidates on Twitter. In total, Labor candidates continue to be mentioned more frequently than their LNP counterparts. But over the past two weeks, this gap has closed slightly: as attention has shifted from Adani to One Nation, Twitter users have directed more questions at LNP and One Nation politicians rather than Labor ones. Retweets, however, continue to favour Labor by a considerable margin: its candidates have received more than four times as many retweets as all other party candidates put together.
A network of interactions around candidate accounts (combining both @mentions and retweets over the course of the entire campaign) demonstrates the state of play at this late stage of the election campaign. Labor commands the largest engagement network, at the centre of the graph. Discussions about Adani have been prominent, and form a distinct cluster of debate that is most closely interconnected with the Labor and Greens networks.
Meanwhile, LNP and One Nation candidates are mentioned frequently alongside one another. These tweets are often asking about their preference arrangements or their willingness to work together in the absence of an outright majority for either major party.
This association is so strong, in fact, that our visualisation algorithm treats both groups as part of the same discussion cluster. Slightly to the side of this sits the Uber debate, which appears to be more closely associated with – and perhaps more strongly supported by – LNP candidates than their Labor counterparts.
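A network like the one described above can be assembled in a straightforward way before any clustering or layout is applied. The sketch below is illustrative only: the record format, account names, and interaction types are hypothetical stand-ins for the study’s actual dataset schema.

```python
# Minimal sketch: building a weighted interaction network from @mentions
# and retweets. Records and account names here are hypothetical examples,
# not the study's real data.
from collections import Counter

interactions = [
    ("voter_a", "candidate_alp_1", "mention"),
    ("voter_a", "candidate_alp_1", "retweet"),
    ("voter_b", "candidate_lnp_1", "mention"),
]

edges = Counter()
for source, target, kind in interactions:
    # @mentions and retweets are combined into a single edge weight,
    # as in the analysis described above
    edges[(source, target)] += 1

print(edges.most_common(1))  # heaviest edge first
```

The resulting weighted edge list is what a force-directed visualisation algorithm would then use to place closely interacting accounts near one another, producing the clusters described above.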
The picture that emerges here is one which points to the strengths and weaknesses of both sides of politics. For Labor, its troubled path to a firmer stance on the Adani mine may remain in environmentally conscious voters’ minds even if the online discussion has died down somewhat.
For the LNP, the emerging view that its best path to government is through an arrangement with One Nation may similarly dent the electorate’s enthusiasm for a change of government. That Labor commands by far the majority of retweets for its messages may give it hope, though – at least in urban electorates, where Twitter is likely to have its greatest footprint.
There’s still plenty of time to go in the current Queensland state election campaign, but early signs from the social media trail offer some encouragement for Labor premier Annastacia Palaszczuk. She is receiving considerably more retweets than Liberal Opposition Leader Tim Nicholls, and chatter about the controversial Adani mine project has declined in recent days.
Twitter and Facebook are now a standard part of the campaigning toolkit for all major parties. Previous state and federal campaigns suggest that voters who’ve already seen a party’s messages in their social media feeds may be a little more open to a chat when the local candidate comes doorknocking. (Labor’s internal review of its 2013 campaign stresses the combination of online and in-person campaigning, for example.)
On Twitter, we’ve identified 60 Labor and 48 Liberal National Party candidates, as well as central party and campaign accounts. The Greens are represented by 34 accounts, while One Nation and Katter’s Australian Party each have only a handful of tweeting candidates. Combined, over the first two weeks of the campaign, they’ve sent some 3,300 tweets in total, and received some 54,000 @mentions and retweets.
These are far from evenly distributed, however. @mentions of parties and politicians tend to favour the incumbent, and this is not surprising: more of the debate on social media and elsewhere will be about the track record of the current government, rather than about the promises of the opposition.
It’s the retweets that tell a more remarkable story. The nearly 7,000 retweets for Labor candidates’ tweets amount to more than twelve times the 570 retweets received by the LNP. During an election campaign, retweets usually do indicate some level of endorsement.
The pattern in this election is considerably different from recent elections. In 2016, for example, the incumbent federal Coalition received far fewer retweets than the Labor opposition. In the 2015 Queensland election, Campbell Newman’s incumbent Liberal National Party government also struggled to attract retweets for its messages.
These patterns do not point to a significant mood for change, or to any substantial willingness amongst Twitter users to promote the LNP’s campaign messages. Conservative commentators may want to chalk this up to a purported left-wing bias in the Australian Twittersphere – but that claim is not borne out by our analysis, which shows that Twitter contains sizeable communities of both left-wing and right-wing supporters.
Adani and One Nation generate heat for the major parties
Labor also seems to have weathered the early onslaught of critical coverage well.
The first week of the campaign saw a substantial volume of debate about the controversial Adani mine project, which divides opinion between the southeastern population centres around Brisbane (where concerns about environmental impacts are high) and the regional centres near the mine (which anticipate greater job prospects from the mine).
During week one, some 1,500 tweets per day, both by and to candidates, contained the word “Adani”. Hashtags related to the controversy (#adani, #stopadani, #coralnotcoal, and others) were the most prominent topical hashtags in our overall dataset, in addition to generic tags like #qldvotes, #qldpol, and #auspol.
The story is further complicated by the fact that, in his role at PricewaterhouseCoopers, Premier Palaszczuk’s partner was involved in Adani’s application for a A$1 billion loan. Palaszczuk announced at the end of the first week of campaigning that she would veto that loan if the application were successful.
Judging by our Twitter data, this veto threat appears to have neutralised the Adani debate to some extent. “Adani” tweets by and to candidates declined from 1,500 to fewer than 600 per day in week two. The overall volume of tweets by and to these accounts has also dropped, from over 5,000 to some 3,700 per day.
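Daily keyword volumes of this kind can be computed with a simple count over the captured tweets. The sketch below is a minimal illustration: the tweet dictionaries and field names are hypothetical, not the authors’ actual dataset schema, and a real pipeline would read from the full capture rather than an in-memory list.

```python
# Hedged sketch: counting tweets per day that mention a keyword,
# as in the "Adani" volumes discussed above. The records and field
# names here are illustrative, not the study's real schema.
from collections import Counter

tweets = [
    {"date": "2017-11-06", "text": "Will Labor veto the Adani loan? #qldvotes"},
    {"date": "2017-11-06", "text": "Doorknocking in Brisbane today"},
    {"date": "2017-11-07", "text": "#StopAdani rally outside parliament"},
]

def daily_keyword_counts(tweets, keyword):
    """Count tweets per day whose text mentions the keyword (case-insensitive)."""
    counts = Counter()
    for tweet in tweets:
        if keyword.lower() in tweet["text"].lower():
            counts[tweet["date"]] += 1
    return counts

print(daily_keyword_counts(tweets, "adani"))
```

A case-insensitive substring match like this also catches hashtag variants such as #StopAdani, which is why hashtags and plain mentions can be tallied together.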
This shift in position may indicate that Labor believes that supporting Adani will lose more votes in the southeast than it will gain further north. Our social media patterns seem to bear out this view.
Meanwhile, with Pauline Hanson’s much-publicised arrival on the campaign trail, the second week has seen more discussion about the role that One Nation may play in the next parliament. In particular, the announcement on the evening of Friday 10 November that the LNP will preference One Nation over Labor in more than half the seats in Queensland has already generated substantial debate. Some 20% of tweets by and to candidates on the following Saturday included keywords related to One Nation and/or preferencing.
While the LNP announcement – after the evening news on a Friday – was probably timed to minimise media scrutiny of its decision, it remains to be seen whether this debate will carry over into the third week of the campaign. Labor will no doubt seek to exploit this preference arrangement to attract traditional conservative voters who remain critical of One Nation.
And finally, if you’re still uncertain about which hashtag to use to join the debate: in tweets by and to candidate accounts, plain old #qldvotes leads #qldvotes2017 by more than ten to one so far. It’s a landslide.