Australian media regulators face the challenge of dealing with global platforms Google and Facebook



‘Google and Facebook are global companies, headquartered in the US, for whom Australia is a significant but relatively small market.’
Shutterstock/Roman Pyshchyk

Terry Flew, Queensland University of Technology

With concerns growing worldwide about the economic power of digital technology giants such as Google and Facebook, there was plenty of interest internationally in Australia’s Digital Platforms Inquiry.

The Australian Competition and Consumer Commission (ACCC) inquiry was seen as providing a forensic account of market dominance by digital platforms, and of the implications for Australian media and for citizens’ rights around privacy and data protection.

The inquiry’s final report, released last month, has been analysed from perspectives such as competition policy, consumer protection and the future of journalism.




Read more:
Consumer watchdog calls for new measures to combat Facebook and Google’s digital dominance


But the major limitation facing the ACCC, and the Australian government, in developing new regulations for digital platforms is jurisdictional authority – given these companies are headquartered in the United States.

More ‘platform neutral’ approach

Among the ACCC’s 23 recommendations is a proposal to reform media regulations to move from the current platform-specific approaches (different rules for television, radio, and print media) towards a “platform-neutral” approach.

This will ensure comparable functions are effectively and consistently regulated:

Digitalisation and the increase in online sources of news and media content highlight inconsistencies in the current sector-specific approach to media regulation in Australia […]

Digital platforms increasingly perform similar functions to media businesses, such as selecting and curating content, evaluating content, and ranking and arranging content online. Despite this, virtually no media regulation applies to digital platforms.

The ACCC’s recommendations to harmonise regulations across different types of media draw on major Australian public inquiries from the early 2010s, such as the Convergence Review and the Australian Law Reform Commission’s review of the national media classification system. These reports identified the inappropriateness of “silo-ised” media laws and regulations in an age of digital convergence.




Read more:
What Australia’s competition boss has in store for Google and Facebook


The ACCC also questions the continued appropriateness of the distinction between platforms and publishers in an age where the largest digital platforms are not simply the carriers of messages circulated among their users.

The report observes that such platforms are increasingly at the centre of digital content distribution. Consumers increasingly access news through platforms such as Facebook and Google, and video content through YouTube.

The advertising dollar

While the ACCC inquiry focused on the impact of digital platforms on news, we can see how they have transformed the media landscape more generally, and where issues of the wider public good arise.

Their dominance over advertising has undercut traditional media business models. Online now accounts for about 50% of total advertising spend, and the ACCC estimates that 71 cents of every dollar spent on digital advertising in Australia goes to Google or Facebook.
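Combining the ACCC’s two estimates gives a rough sense of the platforms’ overall share. The back-of-the-envelope arithmetic below is illustrative only, using the report’s round figures:

```python
# Back-of-the-envelope arithmetic from the ACCC's two estimates:
# online is ~50% of total ad spend, and Google/Facebook take
# ~71 cents of each online advertising dollar.
online_share_of_total = 0.50
google_facebook_share_of_online = 0.71

# Implied share of ALL Australian advertising flowing to the two platforms.
combined_share = online_share_of_total * google_facebook_share_of_online

print(f"{combined_share:.1%}")  # prints 35.5% -- about a third of every ad dollar
```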

All media are now facing the implications of a more general migration to online advertising, as platforms can better micro-target consumers rather than relying on the broad brush approach of mass media advertising.

The larger issue facing potential competitors to the digital giants is the accumulation of user data. This includes the lack of transparency around algorithmic sorting of such data, and the capacity to use machine learning to apply powerful predictive analytics to “big data”.

In line with recent critiques of platform capitalism, the ACCC is concerned about the lack of information consumers have about what data the platforms hold and how it’s being used.

It’s also concerned that the “winner-takes-most” nature of digital markets creates a long-term structural crisis for media businesses, with particularly severe implications for public interest journalism.

Digital diversity

Digital platform companies do not sit easily within a recognisable industry sector as they branch across information technology, content media, and advertising.

They’re also not alike. While all rely on the capacity to generate and make use of consumer data, their business models differ significantly.

The ACCC chose to focus only on Google and Facebook, but they are quite different entities.

Google dominates search advertising and is largely a content aggregator, whereas Facebook for the most part provides display advertising that accompanies user-generated social media. This presents its own challenges in crafting a regulatory response to the rise of these digital platform giants.

A threshold issue is whether digital platforms should be understood to be media businesses, or businesses in a more generic sense.

Communications policy in the 1990s and 2000s commonly classified digital platforms as carriers. This shielded them from laws and regulations relating to content that users uploaded to their sites.

But this carriage/content distinction has always coexisted with active measures on the part of the platform companies to manage content that is hosted on their sites. Controversies around content moderation, and the legal and ethical obligations of platform providers, have accelerated greatly in recent years.

To the degree that companies such as Google and Facebook increasingly operate as media businesses, this would bring aspects of their activities within the regulatory purview of the Australian Communications and Media Authority (ACMA).

The ACCC recommended ACMA should be responsible for brokering a code of conduct governing commercial relationships between the digital platforms and news providers.




Read more:
Consumer watchdog: journalism is in crisis and only more public funding can help


This would give it powers related to copyright enforcement, and allow it to monitor how platforms are acting to guarantee the trustworthiness and reliability of news content and to minimise the circulation of “fake news” on their sites.

Overseas, but over here

Google and Facebook are global companies, headquartered in the US, for whom Australia is a significant but relatively small market.

The capacity to address competition and market dominance issues is limited by the fact that real action could only meaningfully occur in their home market, the US.

Australian regulators will need to work closely with their counterparts in other countries and regions: the US and the European Union are the two most significant in this regard.

Terry Flew, Professor of Communication and Creative Industries, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Anxieties over livestreams can help us design better Facebook and YouTube content moderation



Livestream on Facebook isn’t just a tool for sharing violence – it has many popular social and political uses.
glen carrie / unsplash, CC BY

Andrew Quodling, Queensland University of Technology

As families in Christchurch bury their loved ones following Friday’s terrorist attack, global attention now turns to preventing such a thing ever happening again.

In particular, the role social media played in broadcasting live footage and amplifying its reach is under the microscope. Facebook and YouTube face intense scrutiny.




Read more:
Social media create a spectacle society that makes it easier for terrorists to achieve notoriety


New Zealand’s Prime Minister Jacinda Ardern has reportedly been in contact with Facebook executives to press the case that the footage should not be available for viewing. Australian Prime Minister Scott Morrison has called for a moratorium on amateur livestreaming services.

But beyond these immediate responses, this terrible incident presents an opportunity for longer term reform. It’s time for social media platforms to be more open about how livestreaming works, how it is moderated, and what should happen if or when the rules break down.

Increasing scrutiny

With the alleged perpetrator having apparently flown under the radar prior to the Christchurch attack, collective focus has now turned to the online radicalisation of young men.

As part of that, online platforms face increased scrutiny, and Facebook and YouTube have drawn criticism.

After the original livestream was disseminated on Facebook, YouTube became a venue for the re-upload and propagation of the recorded footage.

Both platforms have made public statements about their efforts at moderation.

YouTube noted the challenges of dealing with an “unprecedented volume” of uploads.

Although it’s been reported that fewer than 4,000 people saw the initial stream on Facebook, the company said:

In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload […]

Focusing chiefly on livestreaming is somewhat reductive. Although the shooter initially streamed his own footage, the greater challenge of controlling the video largely relates to two issues:

  1. the length of time it was available on Facebook’s platform before it was removed
  2. the moderation of “mirror” video publication by people who had chosen to download, edit, and re-upload the video for their own purposes.

These issues illustrate the weaknesses of existing content moderation policies and practices.

Not an easy task

Content moderation is a complex and unenviable responsibility. Platforms like Facebook and YouTube are expected to balance the virtues of free expression and newsworthiness with socio-cultural norms and personal desires, as well as the local regulatory regimes of the countries they operate in.

When platforms perform this responsibility poorly (or abdicate it entirely) they pass the task on to others – like the New Zealand internet service providers that blocked access to websites re-distributing the shooter’s footage.

People might reasonably expect platforms like Facebook and YouTube to have thorough controls over what is uploaded on their sites. However, the companies’ huge user bases mean they often must balance the application of automated, algorithmic systems for content moderation (like Microsoft’s PhotoDNA, and YouTube’s ContentID) with teams of human moderators.




Read more:
A guide for parents and teachers: what to do if your teenager watches violent footage


We know from investigative reporting that moderation teams at platforms like Facebook and YouTube are tasked with particularly challenging work. They have relatively high staff turnover, with moderators quickly burnt out by severe workloads as they review the worst content on the internet. They are paid only meagre wages, with what could be viewed as inadequate mental healthcare.

And while some algorithmic systems can be effective at scale, they can also be subverted by competent users who understand aspects of their methodology. If you’ve ever found a video on YouTube where the colours are distorted, the audio playback is slightly out of sync, or the image is heavily zoomed and cropped, you’ve likely seen someone’s attempt to get around ContentID algorithms.
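To see why such cheap edits work, consider a toy perceptual fingerprint. The sketch below is not ContentID, whose internals are proprietary; it is an assumed stand-in, a minimal “average hash” that reduces a frame to 64 brightness bits and treats a small Hamming distance as a match. Zooming and cropping flips enough bits to defeat the match:

```python
# Toy perceptual hashing sketch (NOT YouTube's actual ContentID):
# shows why a zoom-and-crop edit can evade naive fingerprint matching.

def average_hash(pixels):
    """Hash an 8x8 grayscale frame: one bit per pixel, set if above the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# A synthetic 8x8 frame: a bright diagonal band on a dark background.
frame = [[255 if abs(x - y) <= 1 else 10 for x in range(8)] for y in range(8)]

# "Zoom and crop": take the central 4x4 region and stretch each pixel
# to 2x2, back up to 8x8 -- the kind of cheap edit re-uploaders use.
cropped = [[frame[2 + y // 2][2 + x // 2] for x in range(8)] for y in range(8)]

original_hash = average_hash(frame)
edited_hash = average_hash(cropped)

print(hamming(original_hash, edited_hash))  # prints 18 -- 18 of 64 bits differ
```

An unedited copy would hash to a Hamming distance of 0; the cropped version lands far past any plausible match threshold, even though a human instantly recognises it as the same footage.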

For online platforms, the response to terror attacks is further complicated by the difficult balance they must strike between their desire to protect users from gratuitous or appalling footage and their commitment to inform people seeking news through their platform.

We must also acknowledge the other ways livestreaming features in modern life. Livestreaming is a lucrative niche entertainment industry, with thousands of innocent users broadcasting hobbies with friends from board games to mukbang (social eating), to video games. Livestreaming is important for activists in authoritarian countries, allowing them to share eyewitness footage of crimes, and shift power relationships. A ban on livestreaming would prevent a lot of this activity.

We need a new approach

Facebook and YouTube’s challenges in addressing livestreamed hate crimes tell us something important. We need a more open, transparent approach to moderation. Platforms must talk openly about how this work is done, and be prepared to incorporate feedback from governments and society more broadly.




Read more:
Christchurch attacks are a stark warning of toxic political environment that allows hate to flourish


A good place to start is the Santa Clara principles, generated initially from a content moderation conference held in February 2018 and updated in May 2018. These offer a solid foundation for reform, stating:

  1. companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines
  2. companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension
  3. companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.

A more socially responsible approach to platforms’ roles as moderators of public discourse necessitates a move away from the black-box secrecy platforms are accustomed to — and a move towards more thorough public discussions about content moderation.

In the end, greater transparency may facilitate a less reactive policy landscape, in which both public policy and public opinion are informed by a greater understanding of the complexities of managing new and innovative communications technologies.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Social media create a spectacle society that makes it easier for terrorists to achieve notoriety


Stuart M Bender, Curtin University

The shocking mass shooting in Christchurch on Friday is notable for its use of livestreaming video technology to broadcast horrific first-person footage of the attack on social media.

In the highly disturbing video, the gunman drives to the Masjid Al Noor mosque, walks inside and shoots multiple people before leaving the scene in his car.

The use of social media technology and livestreaming marks the attack as different from many other terrorist incidents. It is a form of violent “performance crime”. That is, the video streaming is a central component of the violence itself; it is not somehow incidental to the crime, or a disgusting trophy for the perpetrator to re-watch later.

In the past, terrorism functioned according to what has been called the “theatre of terror”, which required the media to report on the spectacle of violence created by the group. Nowadays, it’s much easier for someone to both create the spectacle of horrific violence and distribute it widely by themselves.

In an era of social media, which is driven in large part by spectacle, we all have a role to play in ensuring that terrorists aren’t rewarded for their crimes with our clicks.




Read more:
Why news outlets should think twice about republishing the New Zealand mosque shooter’s livestream


Performance crime is about notoriety

There is a tragic and recent history of performance crime videos that use livestreaming and social media video services as part of their tactics.

In 2017, for example, the sickening murder video of an elderly man in Ohio was uploaded to Facebook, and the torture of a man with disabilities in Chicago was livestreamed. In 2015, the murder of two journalists was simultaneously broadcast on-air, and livestreamed.

American journalist Gideon Lichfield wrote of the 2015 incident that the killer:

didn’t just want to commit murder – he wanted the reward of attention, for having done it.

Performance crimes can be distinguished from the way traditional terror attacks and propaganda work, such as the hyper-violent videos spread by ISIS in 2014.

Typical propaganda media that feature violence use a dramatic spectacle to raise attention and communicate the group’s message. But the perpetrators of performance crimes often don’t have a clear ideological message to convey.

Steve Stephens, for example, linked his murder of a random elderly victim to retribution for his own failed relationship. He shot the stranger point-blank on video. Vester Flanagan’s appalling murder of two journalists seems to have been motivated by his anger at being fired from the same network.

The Christchurch attack was a brutal, planned mass murder of Muslims in New Zealand, but we don’t yet know whether it was about communicating the ideology of a specific group.

Even though it’s easy to identify explicit references to white supremacist ideas, the perpetrator’s manifesto is also strewn with confusing and inexplicable internet meme references and red herrings. These could be regarded as trolling attempts to bait the public into interrogating his claims, thereby magnifying the attention paid to the perpetrator and his gruesome killings.




Read more:
Christchurch attacks are a stark warning of toxic political environment that allows hate to flourish


How we should respond

While many questions remain about the attack itself, we need to consider how best to respond to performance crime videos. Since 2012, many academics and journalists have argued that media coverage of mass violence should be limited to prevent the reward of attention from potentially driving further attacks.

That debate has continued following the tragic events in New Zealand. Journalism lecturer Glynn Greensmith argued that our responsibility may well be to limit the distribution of the Christchurch shooting video and manifesto as much as possible.

It seems that, in this case, social media and news platforms have been more mindful about removing the footage, and refusing to rebroadcast it. The video was taken down within 20 minutes by Facebook, which said that in the first 24 hours it removed 1.5 million videos of the attack globally.

Telecommunication service Vodafone moved quickly to block New Zealand users from access to sites that would be likely to distribute the video.

The video is likely to be declared objectionable material, according to New Zealand’s Department of Internal Affairs, which means it is illegal to possess. Many are calling on the public not to share it online.

Simply watching the video can cause trauma

Yet the video still exists, dispersed throughout the internet. It may be removed from official sites, but its online presence is maintained via re-uploads and file-sharing sites. Screenshots of the videos, which frequently appear in news reports, also inherit symbolic and traumatic significance when they serve as visual reminders of the distressing event.

Watching images like these has the potential to provoke vicarious trauma in viewers. Studies since the September 11 attacks suggest that “distant trauma” can be linked to multiple viewings of distressing media images.

While the savage violence of the event is distressing in its own right, this additional potential to traumatise people who simply watch the video is something that also plays into the aims of those committing performance crimes in the name of terror.

Rewarding the spectacle

Platforms like Facebook, Instagram and YouTube are powered by a framework that encourages, rewards and creates performance. People who post cat videos cater to this appetite for entertainment, but so do criminals.

According to British criminologist Majid Yar, the new media environment has created different genres of performance crime. The performances have increased in intensity and criminality, from so-called “happy slapping” videos circulated among adolescents to violent sexual assault videos. The recent attack is a terrifying continuation of this trend, which is predicated on a kind of exhibitionism and a desire to be identified as the performer of the violence.

Researcher Jane O’Dea, who has studied the role played by the media environment in school shootings, claims that we exist in:

a society of the spectacle that regularly transforms ordinary people into “stars” of reality television or of websites like Facebook or YouTube.

Perpetrators of performance crime are inspired by the attention that will inevitably result from the online archive they create leading up to, and during, the event.




Read more:
View from The Hill: A truly inclusive society requires political restraint


We all have a role to play

I have previously argued that this media environment seems to produce violent acts that otherwise may not have occurred. Of course, I don’t mean that the perpetrators are not responsible or accountable for their actions. Rather, performance crime represents a different type of activity specific to the technology and social phenomenon of social media – the accidental dark side of livestreaming services.

Would the alleged perpetrator of this terrorist act in Christchurch still have committed it without the capacity to livestream? We don’t know.

But as Majid Yar suggests, rather than concerning ourselves with old arguments about whether media violence can cause criminal behaviour, we should focus on how the techniques and reward systems we use to represent ourselves to online audiences are in fact a central component of these attacks.

We may hope that social media companies will get better at filtering out violent content, but until they do we should reflect on our own behaviour online. As we like and share content of all kinds on social platforms, let’s consider how our activities could contribute to an overall spectacle society that inspires future perpetrator-produced videos of performance crime – and act accordingly.

Stuart M Bender, Early Career Research Fellow (Digital aesthetics of violence), Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

It’s time for third-party data brokers to emerge from the shadows



Personal data has been dubbed the “new oil”, and data brokers are very efficient miners.
Emanuele Toscano/Flickr, CC BY-NC-ND

Sacha Molitorisz, University of Technology Sydney

Facebook announced last week it would discontinue the partner programs that allow advertisers to use third-party data from companies such as Acxiom, Experian and Quantium to target users.

Graham Mudd, Facebook’s product marketing director, said in a statement:

We want to let advertisers know that we will be shutting down Partner Categories. This product enables third party data providers to offer their targeting directly on Facebook. While this is common industry practice, we believe this step, winding down over the next six months, will help improve people’s privacy on Facebook.

Few people seemed to notice, and that’s hardly surprising. These data brokers operate largely in the background.

The invisible industry worth billions

In 2014, one researcher described the entire industry as “largely invisible”. That’s no mean feat, given how much money is being made. Personal data has been dubbed the “new oil”, and data brokers are very efficient miners. In the 2018 fiscal year, Acxiom expects annual revenue of approximately US$945 million.

The data broker business model involves accumulating information about internet users (and non-users) and then selling it. As such, data brokers have highly detailed profiles on billions of individuals, comprising age, race, sex, weight, height, marital status, education level, politics, shopping habits, health issues, holiday plans, and more.




Read more:
Facebook data harvesting: what you need to know


These profiles come not just from data you’ve shared, but from data shared by others, and from data that’s been inferred. In its 2014 report into the industry, the US Federal Trade Commission (FTC) showed how a single data broker had 3,000 “data segments” for nearly every US consumer.

Based on the interests inferred from this data, consumers are then placed in categories such as “dog owner” or “winter activity enthusiast”. However, some categories are potentially sensitive, including “expectant parent”, “diabetes interest” and “cholesterol focus”, or involve ethnicity, income and age. The FTC’s Jon Leibowitz described data brokers as the “unseen cyberazzi who collect information on all of us”.

In Australia, Facebook launched the Partner Categories program in 2015. Its aim was to “reach people based on what they do and buy offline”. This includes demographic and behavioural data, such as purchase history and home ownership status, which might come from public records, loyalty card programs or surveys. In other words, Partner Categories enables advertisers to use data brokers to reach specific audiences. This is particularly useful for companies that don’t have their own customer databases.

A growing concern

Third party access to personal data is causing increasing concern. This week, Grindr was shown to be revealing its users’ HIV status to third parties. Such news is unsettling, as if there are corporate eavesdroppers on even our most intimate online engagements.

The recent Cambridge Analytica furore also stemmed from third-party access to data. Indeed, apps created by third parties have proved particularly problematic for Facebook. From 2007 to 2014, Facebook encouraged external developers to create apps for users to add content, play games, share photos, and so on.




Read more:
Your online privacy depends as much on your friends’ data habits as your own


Facebook then gave the app developers wide-ranging access to user data, and to users’ friends’ data. The data shared might include details of schooling, favourite books and movies, or political and religious affiliations.

As one group of privacy researchers noted in 2011, this process, “which nearly invisibly shares not just a user’s, but a user’s friends’ information with third parties, clearly violates standard norms of information flow”.

With the Partner Categories program, the buying, selling and aggregation of user data may be largely hidden, but is it unethical? The fact that Facebook has moved to stop the arrangement suggests that it might be.

More transparency and more respect for users

To date, there has been insufficient transparency, insufficient fairness and insufficient respect for user consent. This applies to Facebook, but also to app developers, and to Acxiom, Experian, Quantium and other data brokers.

Users might have clicked “agree” to terms and conditions containing a clause that ostensibly authorised such sharing of data. However, it’s hard to construe this type of consent as morally justifying that sharing.




Read more:
You may be sick of worrying about online privacy, but ‘surveillance apathy’ is also a problem


In Australia, new laws are needed. Data flows in complex and unpredictable ways online, and legislation ought to provide, under threat of significant penalties, that companies (and others) must abide by reasonable principles of fairness and transparency when they deal with personal information. Further, such legislation can help specify what sort of consent is required, and in which contexts. Currently, the Privacy Act doesn’t go far enough, and is too rarely invoked.

In its 2014 report, the US Federal Trade Commission called for laws that enabled consumers to learn about the existence and activities of data brokers. That should be a starting point for Australia too: consumers ought to have reasonable access to information held by these entities.

Time to regulate

Having resisted regulation since 2004, Mark Zuckerberg has finally conceded that Facebook should be regulated – and advocated for laws mandating transparency for online advertising.

Historically, Facebook has made a point of dedicating itself to openness, but Facebook itself has often operated with a distinct lack of openness and transparency. Data brokers have been even worse.

Facebook’s motto used to be “Move fast and break things”. Now Facebook, data brokers and other third parties need to work with lawmakers to move fast and fix things.

Sacha Molitorisz, Postdoctoral Research Fellow, Centre for Media Transition, Faculty of Law, University of Technology Sydney

This article was originally published on The Conversation. Read the original article.

Why the business model of social media giants like Facebook is incompatible with human rights



Facebook’s actions – or inactions – facilitated breaches of privacy and human rights associated with democratic governance.
EPA/Peter DaSilva

Sarah Joseph, Monash University

Facebook has had a bad few weeks. The social media giant had to apologise for failing to protect the personal data of millions of users from being accessed by data mining company Cambridge Analytica. Outrage is brewing over its admission to spying on people via their Android phones. Its stock price plummeted, while millions deleted their accounts in disgust.

Facebook has also faced scrutiny over its failure to prevent the spread of “fake news” on its platforms, including via an apparent orchestrated Russian propaganda effort to influence the 2016 US presidential election.

Facebook’s actions – or inactions – facilitated breaches of privacy and human rights associated with democratic governance. But it might be that its business model – and those of its social media peers generally – is simply incompatible with human rights.

The good

In some ways, social media has been a boon for human rights – most obviously for freedom of speech.

Previously, the so-called “marketplace of ideas” was technically available to all (in “free” countries), but was in reality dominated by the elites. While all could equally exercise the right to free speech, we lacked equal voice. Gatekeepers, especially in the form of the mainstream media, largely controlled the conversation.

But today, anybody with internet access can broadcast information and opinions to the whole world. While not all will be listened to, social media is expanding the boundaries of what is said and received in public. The marketplace of ideas has effectively become bigger, broader and more diverse.

Social media enhances the effectiveness of non-mainstream political movements, public assemblies and demonstrations, especially in countries that exercise tight controls over civil and political rights, or have very poor news sources.

Social media played a major role in co-ordinating the massive protests that brought down dictatorships in Tunisia and Egypt, as well as large revolts in Spain, Greece, Israel, South Korea, and the Occupy movement. More recently, it has facilitated the rapid growth of the #MeToo and #neveragain movements, among others.




Read more:
#MeToo is not enough: it has yet to shift the power imbalances that would bring about gender equality


The bad and the ugly

But the social media “free speech” machines can create human rights difficulties. Those newly empowered voices are not necessarily desirable voices.

The UN recently found that Facebook had been a major platform for spreading hatred against the Rohingya in Myanmar, which in turn led to ethnic cleansing and crimes against humanity.

Video sharing site YouTube seems to automatically guide viewers to the fringiest versions of what they might be searching for. A search on vegetarianism might lead to veganism; jogging to ultra-marathons; Donald Trump’s popularity to white supremacist rants; and Hillary Clinton to 9/11 trutherism.

YouTube, via its algorithm’s natural and probably unintended impacts, “may be one of the most powerful radicalising instruments of the 21st century”, with all the attendant human rights abuses that might follow.

The business model and human rights

Human rights abuses might be embedded in the business model that has evolved for social media companies in their second decade.

Essentially, those models are based on the collection and use for marketing purposes of their users’ data. And the data they have is extraordinary in its profiling capacities, and in the consequent unprecedented knowledge base and potential power it grants to these private actors.

Indirect political influence is commonly exercised, even in the most credible democracies, by private bodies such as major corporations. This power can be partially constrained by “anti-trust laws” that promote competition and prevent undue market dominance.

Anti-trust measures could, for example, be used to hive off Instagram from Facebook, or YouTube from Google. But these companies’ power essentially arises from the sheer number of their users: in late 2017, Facebook was reported as having more than 2.2 billion active users. Anti-trust measures can constrain a company’s acquisitions, but they do not seek to cap the number of its customers.


Power through knowledge

In 2010, Facebook conducted an experiment by randomly deploying a non-partisan “I voted” button into 61 million feeds during the US mid-term elections. That simple action led to 340,000 more votes, or about 0.14% of the US voting population. This number can swing an election. A bigger sample would lead to even more votes.

So Facebook knows how to deploy the button to sway an election, which would clearly be lamentable. However, the mere possession of that knowledge makes Facebook a political player. It now knows that button’s political impact, the types of people it is likely to motivate, which party is favoured by its deployment or non-deployment, and at what times of day.

It might seem inherently incompatible with democracy for that knowledge to be vested in a private body. Yet the retention of such data is the essence of Facebook’s ability to make money and run a viable business.




Read more:
Can Facebook influence an election result?


Microtargeting

A study has shown that, from an analysis of 70 “likes”, a computer knows more about a person’s personality than their friends or flatmates do, and from 150 likes, more than their family. From 300 likes it can outperform one’s spouse.

This enables the micro-targeting of people for marketing messages – whether those messages market a product, a political party or a cause. This is Facebook’s product, from which it generates billions of dollars. It enables extremely effective advertising and the manipulation of its users. This is so even without Cambridge Analytica’s underhanded methods.

Advertising is manipulative: that is its point. Yet it is a long bow to label all advertising as a breach of human rights.

Advertising is available to all with the means to pay. Social media micro-targeting has become another battleground where money is used to attract customers and, in the political arena, influence and mobilise voters.

While the influence of money in politics is pervasive – and probably inherently undemocratic – it seems unlikely that spending money to deploy social media to boost an electoral message is any more a breach of human rights than other overt political uses of money.

Yet the extraordinary scale and precision of its manipulative reach might justify differential treatment of social media compared to other advertising, as its manipulative political effects arguably undermine democratic choices.

As with mass data collection, it may eventually be concluded that such reach is simply incompatible with democratic and human rights.

‘Fake news’

Finally, there is the issue of the spread of misinformation.

While paid advertising may not breach human rights, “fake news” distorts and poisons democratic debate. It is one thing for millions of voters to be influenced by precisely targeted social media messages, but another for maliciously false messages to influence and manipulate millions – whether paid for or not.

In a Declaration on Fake News, several UN and regional human rights experts said fake news interfered with the right to know and receive information – part of the general right to freedom of expression.

Its mass dissemination may also distort rights to participate in public affairs. Russia and Cambridge Analytica (assuming allegations in both cases to be true) have demonstrated how social media can be “weaponised” in unanticipated ways.

Yet it is difficult to know how social media companies should deal with fake news. The suppression of fake news is the suppression of speech – a human right in itself.

The preferred solution outlined in the Declaration on Fake News is to develop technology and digital literacy to enable readers to more easily identify fake news. The human rights community seems to be trusting that the proliferation of fake news in the marketplace of ideas can be corrected with better ideas rather than censorship.

However, one cannot be complacent in assuming that “better speech” triumphs over fake news. A recent study concluded fake news on social media:

… diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information.

Also, internet “bots” apparently spread true and false news at the same rate, which indicates that:

… false news spreads more than the truth because humans, not robots, are more likely to spread it.

The depressing truth may be that human nature is attracted to fake stories over the more mundane true ones, often because they satisfy predetermined biases, prejudices and desires. And social media now facilitates their wildfire spread to an unprecedented degree.

Perhaps social media’s purpose – the posting and sharing of speech – cannot help but generate a distorted and tainted marketplace of fake ideas that undermine political debate and choices, and perhaps human rights.

Fake news disseminated by social media is argued to have played a role in electing Donald Trump to the presidency.
EPA/Jim Lo Scalzo

What next?

It is premature to assert the very collection of massive amounts of data is irreconcilable with the right to privacy (and even rights relating to democratic governance).

Similarly, it is premature to decide that micro-targeting manipulates the political sphere beyond the bounds of democratic human rights.

Finally, it may be that better speech and corrective technology will help to undo fake news’ negative impacts: it is premature to assume that such solutions won’t work.

However, by the time such conclusions can be reached, it may be too late to do much about the problem. This may be a case where government regulation and international human rights law – and even business acumen and expertise – lag too far behind technological developments to appreciate their human rights dangers.

At the very least, we must now seriously question the business models that have emerged from the dominant social media platforms. Maybe the internet should be rewired from the grassroots, rather than be led by digital oligarchs’ business needs.

Sarah Joseph, Director, Castan Centre for Human Rights Law, Monash University

This article was originally published on The Conversation. Read the original article.

Why social media is in the doghouse for both the pollies and the public


Denis Muller, University of Melbourne

The Labor Party’s recent decision to ban its candidates from using their own social media accounts as publicity platforms at the next federal election may be a sign that society’s infatuation with social media as a source of news and information is cooling.

Good evidence for this emerged recently with the publication of the 2018 findings from the Edelman Trust Barometer. The annual study has surveyed more than 33,000 people across the globe about how much trust they have in institutions, including government, media, businesses and NGOs.

This year, there was a sharp increase in trust in journalism as a source of news and information, and a decline in trust in social media and search engines for this purpose. Globally, trust in journalism rose five points to 59%, while trust in social media and search engines fell two points to 51% – a gap of eight points.

In Australia, the level of trust in both was below the global average, but the gap between them was greater: 52% for journalism and 35% for social media and search engines – a 17-point difference.




Read more:
Social media is changing the face of politics – and it’s not good news


Consequences of poor social media savvy

Labor’s decision may also reflect a healthy distrust of its candidates’ judgement about how to use social media for political purposes.

Liberal Senator Jim Molan’s recent sharing of an anti-Islamic post by the British right-wing extremist group Britain First on his Facebook account showed how poor some individual judgements can be.

If ever there was a two-edged sword in politics, social media is it. It gives politicians a weapon with which to cut their way past traditional journalistic gatekeepers and reach the public directly, but it also exposes them to public scrutiny with a relentless intensity that previous generations of politicians never had to endure.


This intensity comes from two sources: the 24/7 news cycle with the associated nonstop interaction between traditional journalism and social media, and the opportunity that digital technology gives everyone to jump instantaneously into public debate.

So Molan’s stupidity, for example, now attracts criticism from the other side of the world. Brendan Cox, the widower of British politician Jo Cox, who was murdered by a man yelling “Britain first”, has weighed in.

The interaction between traditional journalism and social media also means journalists can latch onto stories much more quickly because there are countless pairs of eyes and ears out there tipping them off.




Read more:
Social media can bring down politicians, but can it also make politics better?


The result of this scrutiny is that public figures can never be sure they are off-camera, as it were. This means there has been a significant reduction in their power to control the flow of information about themselves. They are liable to be “on the record” anywhere there is a mic or a smartphone – and may not even know it.

Immigration Minister Peter Dutton is caught on a boom mic quipping about the plight of Pacific Island nations facing rising seas from climate change.

Politics then and now

On Sunday night, the ABC aired part one of the two-part documentary Bob Hawke: The Larrikin and the Leader. In it, Graham Richardson says of Hawke:

He did some appalling things when drunk … He was lucky that he went through an era where he couldn’t be pinged. We didn’t have the internet. We didn’t have mobile phones. Let’s face it, a Bob Hawke today behaving in the same manner would never become prime minister. He’d have been buried long before he got near the parliament.

Would we now think differently of a politician like Bob Hawke if some of his well-documented excesses had been captured and circulated on social media in this way?

Perhaps not. Hawke was of his time, an embodiment of the national mood and of what Australians imagine to be the national larrikin character. He might have thrived.

Hawke is still celebrated for his ability to scull a beer.

With Hawke, what you saw was what you got. So he had a built-in immunity to social media’s particular strength: its capacity to show people up as ridiculous, dishonest or hypocritical.

And his political opponent Malcolm Fraser was, in his later years, adept at using Twitter to criticise the government of one of his Liberal successors as Prime Minister, Tony Abbott.

By exerting the iron discipline for which he was famous, saying exactly what he wanted to say and not a word more, Fraser avoided the pitfalls that the likes of Senator Molan stumble into.

Indeed, US President Donald Trump’s reputation for Twitter gaffes hasn’t hurt his popularity among his base, and is even lauded by some as a mark of authenticity.


So it is likely that the politicians of the past would not have fared very differently from those of the present. The competent would have adapted and used social media to their advantage; the incompetent would have been shown up for what they are.




Read more:
Why social media may not be so good for democracy


Social platforms under fire

Social media has the potential to strengthen democratic life. It makes all public figures – including journalists – more accountable. But as we have seen, especially in the 2016 US presidential elections, it can also be used to weaken democratic life by amplifying the spread of false information.


As a result, democracies everywhere are wrestling with the overarching problem of how to make the giant social media platforms, especially Facebook, accountable for how they use their publishing power.

Out of all this, one trend seems clear: where news and information are concerned, society is no longer dazzled by the novelty of social media and is waking up to its weaknesses.

Denis Muller, Senior Research Fellow in the Centre for Advancing Journalism, University of Melbourne

This article was originally published on The Conversation. Read the original article.

Socially mediated terrorism poses devilish dilemma for social responses


Claire Smith, Flinders University; Gary Jackson, Flinders University, and Koji Mizoguchi, Kyushu University

The terrorist attacks in Paris have resonated around the world. In addition to physical violence, Islamic State (IS) is pursuing a strategy of socially mediated terrorism. The symbolic responses of its opponents can be predicted and may inadvertently further its aims.

In the emotion of the moment, we need to act. We need to be cautious, however, of symbolic reactions that divide Muslims and non-Muslims. We need emblems that act against the xenophobia that is a recruiting tool for jihadists.

Reactions from the West should not erode the Muslim leadership that is essential to overturning “Islamic State”. Queen Rania of Jordan points out:

What the extremists want is to divide our world along fault lines of religion and culture, and so a lot of people in the West may have stereotypes against Arabs and Muslims. But really this fight is a fight between the civilised world and a bunch of crazy people who want to take us back to medieval times. Once we see it that way, we realise that this is about all of us coming together to defend our way of life.

Queen Rania’s statement characterises the Paris attacks as part of a wider conflict around cultural values. How are these values playing out symbolically across the globe?

Propaganda seeks predictable responses

IS’s socially mediated propaganda is sophisticated and planned. This supports an argument that the Paris attacks are the beginning of a global campaign. Symbolic materials characterise IS as invincible. However, other evidence may indicate that it is weak.

The IS representation of the Eiffel Tower.
SITE Intelligence Group

The spontaneous celebration on Twitter by IS supporters was predictable. Its representational coverage of the Paris attacks, however, suggests deep planning.

This planning is embedded in professionally designed images. A reworked image depicts the Eiffel Tower as a triumphal arch with the IS flag flying victoriously on top.

The tower is illuminated and points to the heavens and a God-given victory. The inclusion of a road running through the Eiffel Tower provides a sense of speed, change, even progress. In Arabic, the text states, “We are coming, France” and “The state of Khilafa”.

IS is using symbolic representations of the Paris attacks to garner new recruits.

A sophisticated pre-prepared image of an intrepid fighter walking away from a Paris engulfed in flames was quickly distributed. It is inscribed with the word “France under fire” in Arabic and French.

IS had its ‘France under fire’ image ready to post immediately after the attacks.
INSITE on Terrorism


This image keys into the heroic tropes of online video games such as Prototype and inFAMOUS. Chillingly, it is designed to turn virtual warriors into actual warriors.

The five million young Muslims in France are particular targets. Among online recruitment materials are videos calling them to join other young French nationals who are with IS.


Support for the victims in Paris and for the democratic values of liberty, equality and fraternity is embedded in the blue, white and red lights movement. These lights shone in major cities in the US, Britain, Europe, Australia, New Zealand, China, Japan, Taiwan and South America. They were also displayed in Egypt, Saudi Arabia, the UAE and Malaysia.

However, overall the light displays were seen in few Muslim-majority countries. Such countries are in an invidious position: display the lights and risk being characterised as a lackey of the West; don’t display them and appear unsympathetic to the victims.


Support also is embedded in a parallel Facebook function that allows members to activate a tri-colour filter. Adapted from a rainbow filter used to support same-sex marriage, this filter attracts those with liberal sentiments.

The question of whether to use the French flag to show sympathy for the victims is invidious at a personal level. Many people find themselves exploited and condemned to poverty by neoliberal economic models. They are put in a difficult position. They feel sympathy for the victims. However, they are bitter about how they are being treated by “the West”, including France.

Perils of an ‘us and them’ mindset

As the blue, white and red activism plays out around the globe, there is a potential for this to transform into a symbolic manifestation of an “us and them” mentality. Such a division would support xenophobic forces, which steer recruits towards IS.

The global impact of the attacks can be related to the iconic status of Paris. The attacks hold a personal dimension for millions of people who have visited this city. They have a sense of “there but for the grace of God, go I”. This emotion echoes responses to the destruction of the World Trade Center in New York in 2001.

The Japanese and Italian cafes included in the attacks are symbolic targets for their countries. In March 2015, IS spokesman Abu Mohammad al-Adnani stated that the group would attack “Paris, before Rome”. Rome is a target because of its symbolic role as the centre of Christianity. Japan is a target because of its role in the coalition forces, and it already suffered the execution of Japanese hostages early in 2015.

In Japan, the cultural reaction has been relatively low key, as part of a strategy of minimising terrorist attention. The blue, white and red lights solidarity received minimal press coverage. There have been few reports of the Japanese restaurant that was one of the targets. In addition to factual coverage of the attacks, Japanese reports have concentrated on implications for security at the 2020 Summer Olympics in Tokyo.

Are there any symbols indicating good news? The Syrian passport found near the body of one of the attackers could be a sign of weakness. It could have been “planted” there – why carry a passport on a suicide mission?

If so, its purpose is to increase European xenophobia and encourage the closing of borders to Syrian refugees. This suggests the mass exodus of Muslim refugees from Syria is hurting IS. The propaganda could be a sign of alarm in IS leadership ranks.

In our responses to the Paris attacks, the grief of the West should not be allowed to overshadow the opprobrium of Muslim countries. Muslims are best placed to challenge the Islamic identity of this self-declared state.

As Queen Rania states, the war against IS must be led by Muslims and Arabs. To ensure success, the international community needs to support, not lead, Muslim efforts.


Claire Smith, Professor of Archaeology, Flinders University; Gary Jackson, Research Associate in Archaeology, Flinders University, and Koji Mizoguchi, Professor of Archaeology, Kyushu University

This article was originally published on The Conversation. Read the original article.

How social media was key to Islamic State’s attacks on Paris


Robyn Torok, Edith Cowan University

While the average person was getting on with life in Paris before last Friday’s terror bombings and shootings, Twitter threads in Arabic from the Middle East were urging attacks on coalition forces in their home countries.

“Advance, advance – forward, forward” they said, regarding Paris.

Iraqi forces had warned coalition countries one day before the attack that IS’s leader Abu Bakr al-Baghdadi had called for “[…] bombings or assassinations or hostage taking in the coming days”.

In addition, messages on the social media app Telegram from the Islamic State’s Al-Hayat Media Center suggested that something more sinister might be afoot, or at least in the works.

In late September 2015, IS made use of Telegram’s new “Channels” tool, setting up its very own channel called Nashir, which translates as “distributor” in English.

Hiding in privacy

Telegram is a messaging app, launched in 2013, that can be set up on almost any device and places a strong focus on privacy.

Telegram is one of many messaging applications that can be used to send private messages.
Telegram

IS uses Telegram channels because they are more difficult for security agencies to monitor and disrupt than other platforms such as Twitter or Facebook.

An important tool that agencies use to tackle violent extremism is the counter-narrative. The aim is to address and challenge the propaganda and misinformation IS disseminates to potential recruits and sympathisers.

Counter-narratives disrupt the flow of information and the recruitment process. But on Telegram channels, information moves in one direction only, making it harder to counter jihadist propaganda and lies.

IS uses Telegram not just to post propaganda, but also to spread training manuals and advice on how to obtain and import weapons, make bombs, and carry out lone attacks on individuals using household equipment.

Its channels carry posts encouraging attacks on soft targets, activating lone-wolf style attacks, and giving the green light for small terrorist pockets or cells within the community to launch their assaults.

Inciting acts of violence is a key element of IS’s radical religious ideology, which holds that its followers are on the “true” path of Allah and are helping to bring about a great apocalyptic battle between coalition forces and “Rome” – a battle that, to them, is the will of Allah.

Social media advantage

Social media is prominent in recruitment strategies used by terrorist groups, in particular, IS.

Facebook is a key platform for gathering young fans, supporters and recruits, and for inciting them to acts of violence through propaganda and the exploitation of Islamic grievances.

When it comes to real-time orchestrating of terror events, IS is adopting encrypted messaging applications – including Kik, Surespot, Wickr and Telegram, as previously mentioned – that are very difficult to compromise or even hack.

What is advantageous for IS is that messages have what is termed a “burn time”: they are deleted after a set period and no longer show up on a phone or other device.

This benefits recruiters, as it means they can fly under the radar more readily, making it more difficult for agencies to detect and prevent attacks.

IS is also using the PlayStation 4 network to recruit and plan attacks. Belgium’s deputy prime minister and minister of security and home affairs, Jan Jambon, said the PlayStation 4 was more difficult for authorities to monitor than WhatsApp and other applications.

After the Paris attacks

Not long after the attacks in Paris, IS released an audio and written statement from its central command claiming responsibility. The statement was systematically and widely broadcast across social media platforms.

The statement contained warnings that “[…] this is just the beginning of attacks […]”. At the same time, a propaganda video entitled “What are you waiting for?” was circulated on Facebook, Twitter and Telegram.

IS continues to use social media as part of its terror campaign. Its aim is to maintain the focus of its recruits and fighters within coalition countries. It also aims to further recruit home-grown jihadists to acts of violence while driving fear into the heartland of European and Western countries.

With privacy on everyone’s mind, encrypted messaging applications have gained momentum by allowing people to communicate without worrying about unwanted third-party access.

Unfortunately, terrorists have also utilised these features as a means to go undetected in organising real-time operations and preparation for terrorist attacks.

Terrorists are ahead of the game, and we don’t want to be playing continual catch-up. If terrorists continue using these applications to arrange acts of terrorism covertly, then security agencies need to balance the collection of information from technologically advanced services with human intelligence.

Dealing with the threat of IS and other terror organisations misusing encrypted applications would mean that law enforcement agencies require access to encrypted communications. While one could argue this may compromise data security, and that it should be assessed alongside internet vulnerabilities, it must be balanced against the current climate of security threats both domestically and internationally.


Robyn Torok, PhD Candidate, Security Research Institute, Edith Cowan University

This article was originally published on The Conversation. Read the original article.

Digital seduction of jihad: Social media delivering militants’ message and driving recruitment


Social Media: Gaining Control


The following articles are the first two in a series on the difficult task of controlling social media and the various social networks and web applications you use. There are, of course, various strategies, and a customised approach that suits your own situation is generally the way to go. I find myself constantly adapting what I do to my current situation. Sometimes a strategy works for a while before breaking down; at other times it doesn’t seem to get me far at all.

Perhaps what is outlined in the articles below will be a help to you. Some of the strategies mentioned I already use, and successfully, but not owning a smartphone means I am limited to various web applications and strategies, so they won’t all work for me.

For more, visit:
http://buckontech.blogspot.com.au/2012/02/2012-year-to-get-control-of-your-social.html
http://buckontech.blogspot.com.au/2012/02/5-must-have-tools-for-managing-social.html