Is social media damaging to children and teens? We asked five experts



They need to have it to fit in, but social media is probably doing teens more harm than good.
www.shutterstock.com

Alexandra Hansen, The Conversation

If you have kids, chances are you’ve worried about their presence on social media.

Who are they talking to? What are they posting? Are they being bullied? Do they spend too much time on it? Do they realise their friends’ lives aren’t as good as they look on Instagram?

We asked five experts if social media is damaging to children and teens.

Four out of five experts said yes

The four experts who ultimately said social media is damaging cited its negative effects on mental health, sleep and body image, along with cyberbullying, privacy concerns, and the pressure on young people of comparing themselves with others.

However, they also conceded it can have positive effects in connecting young people with others, and that living without it might be even more ostracising.

The dissenting voice said it’s not social media itself that’s damaging, but how it’s used.

If you have a “yes or no” health question you’d like posed to Five Experts, email your suggestion to: alexandra.hansen@theconversation.edu.au


Karyn Healy is a researcher affiliated with the Parenting and Family Support Centre at The University of Queensland and a psychologist working with schools and families to address bullying. Karyn is co-author of a family intervention for children bullied at school. Karyn is a member of the Queensland Anti-Cyberbullying Committee, but not a spokesperson for this committee; this article presents only her own professional views.

Alexandra Hansen, Chief of Staff, The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Twitter is banning political ads – but the real battle for democracy is with Facebook and Google



Twitter should get credit for its sensible move, but the microblogging company is tiny compared to Facebook and Google.
Shutterstock

Johan Lidberg, Monash University

Finally, some good news from the weirdo-sphere that is social media. Twitter CEO Jack Dorsey has announced that, effective November 22, the microblogging platform will ban all political advertising – globally.

This is a momentous move by Twitter. It comes when Facebook and its CEO Mark Zuckerberg are under increasing pressure to deal with the amount of mis- and disinformation published via paid political advertising on Facebook.

Zuckerberg recently told a congressional hearing that Facebook had no plans to fact-check political ads, and he did not answer a direct question from Congresswoman Alexandria Ocasio-Cortez about whether Facebook would take down political ads found to be untrue. Not a good look.

A few days after Zuckerberg’s train-wreck appearance before the congressional committee, Twitter announced its move.




Read more:
Merchants of misinformation are all over the internet. But the real problem lies with us


While Twitter should get credit for its sensible move, the microblogging company is tiny compared to Facebook and Google. So, until the two giants change, Twitter’s political ad ban will have little effect on elections around the globe.

A symptom of the democratic flu

It’s important to call out Google on political advertising too. The company often manages to fly under the radar on this issue, hiding behind Facebook, which takes most of the flak.

The global social media platforms are injecting poison into liberal democratic systems around the globe. The misinformation and outright lies they allow to be published on their platforms are partly responsible for the increasingly bitter partisan divides between different sides of politics in most mature liberal democracies.

Add to this the micro-targeting of voters illustrated by the Cambridge Analytica scandal, and a picture emerges of long-standing democratic systems under extreme stress. This is clearly exemplified by the UK parliament’s paralysis over Brexit and the canyon-deep political divides in the US.




Read more:
Why you should talk to your children about Cambridge Analytica


Banning political advertising only deals with a symptom of the democratic flu the platforms are causing. The root cause of the flu is the fact social media platforms are no longer only platforms – they are publishers.

Until they acknowledge this and agree to adhere to the legal and ethical frameworks connected with publishing, our democracies will not recover.

Not platforms, but publishers

Being a publisher is complex and much more expensive than being a platform. You have to hire editorial staff (unless you can create algorithms advanced enough to do editorial tasks) to fact-check, edit and curate content. And you have to become a good corporate citizen, accepting you have social responsibilities.

Convincing the platforms to accept their publisher role is the most sustainable long-term way of dealing with the current toxic content issue.

Accepting publisher status could be a win-win, with the social media companies rebuilding trust with the public and governments by acting ethically and socially responsibly, and stopping the poisoning of our democracies.

Mark Zuckerberg claims Facebook users being able to publish lies and misinformation is a free speech issue. It is not. Free speech is a privilege as well as a right and, like all privileges, it comes with responsibilities and limitations.

Examples of such limitations are defamation laws, and racial vilification and discrimination laws. And that’s just the legal framework. To this should be added the strong ethical framework that applies to publishing.

Ownership concentration like never before

Then there’s the global social media oligopoly issue. Never before in recorded human history has any industry achieved the level of ownership concentration displayed by the social media companies. This is why the issue is so deeply serious: it’s global, it reaches billions of people, and the money and profits involved are staggering.




Read more:
The fightback against Facebook is getting stronger


Facebook co-founder Chris Hughes got it absolutely right when he pointed out, in his New York Times article, that the Federal Trade Commission – the US equivalent of the Australian Competition and Consumer Commission – got it wrong when it allowed Facebook to buy Instagram and WhatsApp.

Hughes wants Facebook broken up, and points to parts of US civil society already moving in this direction. He writes:

This movement of public servants, scholars and activists deserves our support. Mark Zuckerberg cannot fix Facebook, but our government can.

Yesterday, I posted on my Facebook timeline for the first time since the Cambridge Analytica scandal broke. I made the point that after Twitter’s announcement, the ball is now squarely in Facebook’s and Google’s courts.

For research and professional reasons, I cannot delete my Facebook account. But I can pledge to not be an active Facebook user until the company grows up and shoulders its social responsibility as an ethical publisher that enhances our democracies instead of undermining them.

Johan Lidberg, Associate Professor, School of Media, Film and Journalism, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Anti-vaccination mothers have outsized voice on social media – pro-vaccination parents could make a difference


Vaccinations are important to protect against a host of diseases.
www.shutterstock.com

Brooke W. McKeever, University of South Carolina and Robert McKeever, University of South Carolina

A high school student from Ohio made national headlines recently by getting inoculated despite his family’s anti-vaccination beliefs.

Ethan Lindenberger, 18, who never had been vaccinated, had begun to question his parents’ decision not to immunize him. He went online to research and ask questions, posting to Reddit, a social discussion website, about how to be vaccinated. His online quest went viral.

In March 2019, he was invited to testify before a U.S. Senate Committee hearing on vaccines and preventable disease outbreaks. In his testimony, he said that his mother’s refusal to vaccinate him was informed partly by her online research and the misinformation about vaccines she found on the web.

Lindenberger’s mother is hardly alone. Public health experts have blamed online anti-vaccination discussions in part for New York’s worst measles outbreak in 30 years. Anti-vaccine activists also have been cited for the growth of anti-vaccination sentiments in the U.S. and abroad.

We are associate professors who study health communication. We are also parents who read online vaccination-related posts, and we decided to conduct research to better understand people’s communication behaviors related to childhood vaccinations. Our research examined the voices most central to this online discussion: mothers. Our findings show that those who oppose vaccinations communicate most about the issue.

What prompts mothers to speak out

A strong majority of parents in the U.S. support vaccinations, yet anti-vaccination rates in the U.S. and globally are rising. The World Health Organization identified the reluctance or refusal to vaccinate despite the availability of vaccines as one of the top 10 threats to global health in 2019.

Mothers are critical decision-makers in determining whether their children should be vaccinated. In our study, we surveyed 455 mothers online to determine who communicates most about vaccinations and why.

In general, previous research has shown that people evaluate opinion climates – what the majority opinion seems to say – before expressing their own ideas about issues. This is true particularly on controversial subjects such as affirmative action, abortion or immigration. If an individual perceives their opinion to be unpopular, they may be less likely to say what they think, especially if an issue receives a lot of media attention, a phenomenon known as the spiral of silence.

If individuals, however, have strong beliefs about an issue, they may express their opinions whether they are commonly held or minority perspectives. These views can dominate conversations as others online find support for their views and join in.

Our recent study found that mothers who contributed information online shared several perspectives. Mothers who didn’t strongly support childhood vaccinations were more likely to seek out, pay attention to and forward information about the issue, and to speak out about it, than those who did.

Those who believed that vaccinations were an important issue (whether they were for or against them) were more likely to express an opinion. And those who opposed vaccinations were more likely to post their beliefs online.

Ethan Lindenberger testifies before a congressional committee about his decision to be vaccinated against his family’s wishes.
AP Photo/Carolyn Kaster

How social media skews facts

Social media information read by millions of people can influence online news content and amplify minority opinions and health myths. For example, Twitter and Reddit posts related to the vaccine-autism myth can drive news coverage.

Those who expressed online opinions about vaccinations also drove news coverage. Other research we co-authored shows that online news stories in the U.S., Canada and the U.K. followed tweets related to the vaccine-autism myth.

Recent efforts by social media sites such as Facebook to stop false health information from spreading can help correct public misinformation. However, it is unclear what types of communication will counter misinformation and myths that are repeated and reinforced online.

Countering skepticism

Our work suggests that those who agree with the scientific facts about vaccination may not feel the need to pay attention to this issue or voice their opinions online. They likely already have made up their minds and vaccinated their children.

But from a health communication perspective, it is important that parents who support vaccination voice their opinions and experiences, particularly in online environments.

Studies show that how much parents trust or distrust doctors, scientists or the government influences where they land in the vaccination debate. Perspectives of other parents also provide a convincing narrative to understand the risks and benefits of vaccination.

Scientific facts and messaging about vaccines, such as information from organizations like the World Health Organization and the Centers for Disease Control and Prevention, are important in the immunization debate.

But research demonstrates that social consensus, informed in part by peers and other parents, is also an effective element in conversations that shape decisions.

If mothers or parents who oppose or question vaccinations continue to communicate, while those who support vaccinations remain silent, a false consensus may grow. This could result in more parents believing that a reluctance to vaccinate children is the norm – not the exception.


Brooke W. McKeever, Associate Professor, University of South Carolina and Robert McKeever, Associate Professor, University of South Carolina

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Australian media regulators face the challenge of dealing with global platforms Google and Facebook



‘Google and Facebook are global companies, headquartered in the US, for whom Australia is a significant but relatively small market.’
Shutterstock/Roman Pyshchyk

Terry Flew, Queensland University of Technology

With concerns growing worldwide about the economic power of digital technology giants such as Google and Facebook, there was plenty of interest internationally in Australia’s Digital Platforms Inquiry.

The Australian Competition and Consumer Commission (ACCC) inquiry was seen as undertaking a forensic account of market dominance by digital platforms, and the implications for Australian media and the rights of citizens around privacy and data protection.

The inquiry’s final report, released last month, has been analysed from perspectives such as competition policy, consumer protection and the future of journalism.




Read more:
Consumer watchdog calls for new measures to combat Facebook and Google’s digital dominance


But the major limitation facing the ACCC, and the Australian government, in developing new regulations for digital platforms is jurisdictional authority – given these companies are headquartered in the United States.

More ‘platform neutral’ approach

Among the ACCC’s 23 recommendations is a proposal to reform media regulations to move from the current platform-specific approaches (different rules for television, radio, and print media) towards a “platform-neutral” approach.

This will ensure comparable functions are effectively and consistently regulated:

Digitalisation and the increase in online sources of news and media content highlight inconsistencies in the current sector-specific approach to media regulation in Australia […]

Digital platforms increasingly perform similar functions to media businesses, such as selecting and curating content, evaluating content, and ranking and arranging content online. Despite this, virtually no media regulation applies to digital platforms.

The ACCC’s recommendations to harmonise regulations across different types of media draw on major Australian public inquiries from the early 2010s, such as the Convergence Review and the Australian Law Reform Commission’s review of the national media classification system. These reports identified the inappropriateness of “silo-ised” media laws and regulations in an age of digital convergence.




Read more:
What Australia’s competition boss has in store for Google and Facebook


The ACCC also questions the continued appropriateness of the distinction between platforms and publishers in an age where the largest digital platforms are not simply the carriers of messages circulated among their users.

The report observes that such platforms are increasingly at the centre of digital content distribution. Online consumers increasingly access news through platforms such as Facebook and Google, as well as video content through YouTube.

The advertising dollar

While the ACCC inquiry focused on the impact of digital platforms on news, we can see how they have transformed the media landscape more generally, and where issues of the wider public good arise.

Their dominance over advertising has undercut traditional media business models. Online now accounts for about 50% of total advertising spend, and the ACCC estimates that 71 cents of every dollar spent on digital advertising in Australia goes to Google or Facebook.

All media are now facing the implications of a more general migration to online advertising, as platforms can better micro-target consumers rather than relying on the broad brush approach of mass media advertising.

The larger issue facing potential competitors to the digital giants is the accumulation of user data. Added to this are the lack of transparency around the algorithmic sorting of such data, and the capacity to use machine learning to apply powerful predictive analytics to “big data”.

In line with recent critiques of platform capitalism, the ACCC is concerned about the lack of information consumers have about what data the platforms hold and how it’s being used.

It’s also concerned the “winner-takes-most” nature of digital markets creates a long term structural crisis for media businesses, with particularly severe implications for public interest journalism.

Digital diversity

Digital platform companies do not sit easily within a recognisable industry sector as they branch across information technology, content media, and advertising.

They’re also not alike. While all rely on the capacity to generate and make use of consumer data, their business models differ significantly.

The ACCC chose to focus only on Google and Facebook, but they are quite different entities.

Google dominates search advertising and is largely a content aggregator, whereas Facebook for the most part provides display advertising that accompanies user-generated social media. This presents its own challenges in crafting a regulatory response to the rise of these digital platform giants.

A threshold issue is whether digital platforms should be understood to be media businesses, or businesses in a more generic sense.

Communications policy in the 1990s and 2000s commonly classified digital platforms as carriers. This indemnified them from laws and regulations relating to content that users uploaded to their sites.

But this carriage/content distinction has always coexisted with active measures on the part of the platform companies to manage content that is hosted on their sites. Controversies around content moderation, and the legal and ethical obligations of platform providers, have accelerated greatly in recent years.

To the degree that companies such as Google and Facebook increasingly operate as media businesses, this would bring aspects of their activities within the regulatory purview of the Australian Communication and Media Authority (ACMA).

The ACCC recommended ACMA should be responsible for brokering a code of conduct governing commercial relationships between the digital platforms and news providers.




Read more:
Consumer watchdog: journalism is in crisis and only more public funding can help


This would give it powers related to copyright enforcement, allow it to monitor how platforms are acting to guarantee the trustworthiness and reliability of news content, and minimise the circulation of “fake news” on their sites.

Overseas, but over here

Google and Facebook are global companies, headquartered in the US, for whom Australia is a significant but relatively small market.

The capacity to address competition and market dominance issues is limited by the fact that real action could only meaningfully occur in their home market, the US.

Australian regulators are going to need to work closely with their counterparts in other countries and regions: the US and the European Union are the two most significant in this regard.

Terry Flew, Professor of Communication and Creative Industries, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Anxieties over livestreams can help us design better Facebook and YouTube content moderation



Livestream on Facebook isn’t just a tool for sharing violence – it has many popular social and political uses.
Glen Carrie/Unsplash, CC BY

Andrew Quodling, Queensland University of Technology

As families in Christchurch bury their loved ones following Friday’s terrorist attack, global attention now turns to preventing such a thing ever happening again.

In particular, the role social media played in broadcasting live footage and amplifying its reach is under the microscope. Facebook and YouTube face intense scrutiny.




Read more:
Social media create a spectacle society that makes it easier for terrorists to achieve notoriety


New Zealand’s Prime Minister Jacinda Ardern has reportedly been in contact with Facebook executives to press the case that the footage should not be available for viewing. Australian Prime Minister Scott Morrison has called for a moratorium on amateur livestreaming services.

But beyond these immediate responses, this terrible incident presents an opportunity for longer term reform. It’s time for social media platforms to be more open about how livestreaming works, how it is moderated, and what should happen if or when the rules break down.

Increasing scrutiny

With the alleged perpetrator having apparently flown under the radar prior to the incident in Christchurch, our collective focus has now turned to the online radicalisation of young men.

As part of that, online platforms face increased scrutiny, and Facebook and YouTube have drawn criticism.

After the original livestream was disseminated on Facebook, YouTube became a venue for the re-upload and propagation of the recorded footage.

Both platforms have made public statements about their efforts at moderation.

YouTube noted the challenges of dealing with an “unprecedented volume” of uploads.

Although it’s been reported that fewer than 4,000 people saw the initial stream on Facebook, Facebook said:

In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload […]

Focusing chiefly on livestreaming is somewhat reductive. Although the shooter initially streamed his own footage, the greater challenge of controlling the video largely relates to two issues:

  1. the length of time it was available on Facebook’s platform before it was removed
  2. the moderation of “mirror” video publication by people who had chosen to download, edit, and re-upload the video for their own purposes.

These issues illustrate the weaknesses of existing content moderation policies and practices.

Not an easy task

Content moderation is a complex and unenviable responsibility. Platforms like Facebook and YouTube are expected to balance the virtues of free expression and newsworthiness with socio-cultural norms and personal desires, as well as the local regulatory regimes of the countries they operate in.

When platforms perform this responsibility poorly (or abdicate it utterly) they pass the task on to others — like the New Zealand internet service providers that blocked access to websites re-distributing the shooter’s footage.

People might reasonably expect platforms like Facebook and YouTube to have thorough controls over what is uploaded on their sites. However, the companies’ huge user bases mean they often must balance the application of automated, algorithmic systems for content moderation (like Microsoft’s PhotoDNA, and YouTube’s ContentID) with teams of human moderators.




Read more:
A guide for parents and teachers: what to do if your teenager watches violent footage


We know from investigative reporting that the moderation teams at platforms like Facebook and YouTube are tasked with particularly challenging work. They have relatively high staff turnover, with moderators quickly burnt out by severe workloads spent reviewing the worst content on the internet, supported by only meagre wages and what could be viewed as inadequate mental healthcare.

And while some algorithmic systems can be effective at scale, they can also be subverted by competent users who understand aspects of their methodology. If you’ve ever found a video on YouTube where the colours are distorted, the audio playback is slightly out of sync, or the image is heavily zoomed and cropped, you’ve likely seen someone’s attempt to get around ContentID algorithms.
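
To see why such evasions work, consider a toy version of fingerprint matching. The sketch below is a minimal illustration only, not the actual ContentID or PhotoDNA systems (whose internals are proprietary): it computes a simple “average hash” of an image, and small edits such as zooming, cropping or colour distortion flip enough bits that a naive matcher no longer recognises a re-uploaded frame. The function names and the 10-bit threshold are illustrative assumptions.

```python
# A toy perceptual "average hash": an illustration of why fingerprint
# matching can be evaded, NOT the proprietary ContentID/PhotoDNA systems.
from PIL import Image  # assumes the Pillow imaging library is installed

def average_hash(path: str, size: int = 8) -> int:
    """Shrink the image to size x size greyscale pixels, then set one bit
    per pixel brighter than the mean, giving a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def matches_known_upload(candidate: int, known: int, max_bits: int = 10) -> bool:
    """Treat as a match if the fingerprints differ in at most max_bits of 64."""
    return bin(candidate ^ known).count("1") <= max_bits

# Hypothetical usage: a zoomed, cropped or colour-shifted re-upload can push
# the bit difference past the threshold, so the naive matcher misses it.
# known = average_hash("original_frame.png")
# print(matches_known_upload(average_hash("reupload_frame.png"), known))
```

Real systems are far more robust than this, but the cat-and-mouse dynamic is the same: any fixed fingerprinting scheme invites transformations designed to slip past it.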

For online platforms, the response to terror attacks is further complicated by the difficult balance they must strike between their desire to protect users from gratuitous or appalling footage and their commitment to inform people seeking news through their platform.

We must also acknowledge the other ways livestreaming features in modern life. Livestreaming is a lucrative niche entertainment industry, with thousands of innocent users broadcasting hobbies to friends, from board games to mukbang (social eating) to video games. Livestreaming is also important for activists in authoritarian countries, allowing them to share eyewitness footage of crimes and shift power relationships. A ban on livestreaming would prevent much of this activity.

We need a new approach

Facebook’s and YouTube’s challenges in addressing the issue of livestreamed hate crimes tell us something important. We need a more open, transparent approach to moderation. Platforms must talk openly about how this work is done, and be prepared to incorporate feedback from our governments and society more broadly.




Read more:
Christchurch attacks are a stark warning of toxic political environment that allows hate to flourish


A good place to start is the Santa Clara principles, generated initially from a content moderation conference held in February 2018 and updated in May 2018. These offer a solid foundation for reform, stating:

  1. companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines
  2. companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension
  3. companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.

A more socially responsible approach to platforms’ roles as moderators of public discourse necessitates a move away from the black-box secrecy platforms are accustomed to — and a move towards more thorough public discussions about content moderation.

In the end, greater transparency may facilitate a less reactive policy landscape, where both public policy and opinion have a greater understanding of the complexities of managing new and innovative communications technologies.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Social media create a spectacle society that makes it easier for terrorists to achieve notoriety


Stuart M Bender, Curtin University

The shocking mass shooting in Christchurch on Friday is notable for the perpetrator’s use of livestreaming technology to broadcast horrific first-person footage of the attack on social media.

In the highly disturbing video, the gunman drives to the Masjid Al Noor mosque, walks inside and shoots multiple people before leaving the scene in his car.

The use of social media technology and livestreaming marks the attack as different from many other terrorist incidents. It is a form of violent “performance crime”. That is, the video streaming is a central component of the violence itself; it’s not somehow incidental to the crime, or a disgusting trophy for the perpetrator to re-watch later.

In the past, terrorism functioned according to what has been called the “theatre of terror”, which required the media to report on the spectacle of violence created by the group. Nowadays, it’s much easier for someone to both create the spectacle of horrific violence and distribute it widely by themselves.

In an era of social media, which is driven in large part by spectacle, we all have a role to play in ensuring that terrorists aren’t rewarded for their crimes with our clicks.




Read more:
Why news outlets should think twice about republishing the New Zealand mosque shooter’s livestream


Performance crime is about notoriety

There is a tragic and recent history of performance crime videos that use livestreaming and social media video services as part of their tactics.

In 2017, for example, the sickening video of the murder of an elderly man in Ohio was uploaded to Facebook, and the torture of a man with disabilities in Chicago was livestreamed. In 2015, the murder of two journalists was simultaneously broadcast on air and livestreamed.

American journalist Gideon Lichfield wrote of the 2015 incident that the killer:

didn’t just want to commit murder – he wanted the reward of attention, for having done it.

Performance crimes can be distinguished from the way traditional terror attacks and propaganda work, such as the hyper-violent videos spread by ISIS in 2014.

Typical propaganda media that feature violence use a dramatic spectacle to raise attention and communicate the group’s message. But the perpetrators of performance crimes often don’t have a clear ideological message to convey.

Steve Stephens, for example, linked his murder of a random elderly victim to retribution for his own failed relationship. He shot the stranger point-blank on video. Vester Flanagan’s appalling murder of two journalists seems to have been motivated by his anger at being fired from the same network.

The Christchurch attack was a brutal, planned mass murder of Muslims in New Zealand, but we don’t yet know whether it was about communicating the ideology of a specific group.

Even though it’s easy to identify explicit references to white supremacist ideas, the perpetrator’s manifesto is also strewn with confusing and inexplicable internet meme references and red herrings. These could be regarded as trolling attempts to bait the public into interrogating his claims, magnifying the attention paid to the perpetrator and his gruesome killings.




Read more:
Christchurch attacks are a stark warning of toxic political environment that allows hate to flourish


How we should respond

While many questions remain about the attack itself, we need to consider how best to respond to performance crime videos. Since 2012, many academics and journalists have argued that media coverage of mass violence should be limited to prevent the reward of attention from potentially driving further attacks.

That debate has continued following the tragic events in New Zealand. Journalism lecturer Glynn Greensmith argued that our responsibility may well be to limit the distribution of the Christchurch shooting video and manifesto as much as possible.

It seems that, in this case, social media and news platforms have been more mindful about removing the footage, and refusing to rebroadcast it. The video was taken down within 20 minutes by Facebook, which said that in the first 24 hours it removed 1.5 million videos of the attack globally.

Telecommunication service Vodafone moved quickly to block New Zealand users from access to sites that would be likely to distribute the video.

The video is likely to be declared objectionable material, according to New Zealand’s Department of Internal Affairs, which means it is illegal to possess. Many are calling on the public not to share it online.

Simply watching the video can cause trauma

Yet the video still exists, dispersed throughout the internet. It may be removed from official sites, but its online presence is maintained via re-uploads and file-sharing sites. Screenshots of the videos, which frequently appear in news reports, also inherit symbolic and traumatic significance when they serve as visual reminders of the distressing event.

Watching images like these has the potential to provoke vicarious trauma in viewers. Studies since the September 11 attacks suggest that “distant trauma” can be linked to multiple viewings of distressing media images.

While the savage violence of the event is distressing in its own right, this additional potential to traumatise people who simply watch the video is something that also plays into the aims of those committing performance crimes in the name of terror.

Rewarding the spectacle

Platforms like Facebook, Instagram and YouTube are powered by a framework that encourages, rewards and creates performance. People who post cat videos cater to this appetite for entertainment, but so do criminals.

According to British criminologist Majid Yar, the new media environment has created different genres of performance crime, and the performances have increased in intensity and criminality: from so-called “happy slapping” videos circulated among adolescents to violent sexual assault videos. The recent attack is a terrifying continuation of this trend, which is predicated on a kind of exhibitionism and a desire to be identified as the performer of the violence.

Researcher Jane O’Dea, who has studied the role played by the media environment in school shootings, claims that we exist in:

a society of the spectacle that regularly transforms ordinary people into “stars” of reality television or of websites like Facebook or YouTube.

Perpetrators of performance crime are inspired by the attention that will inevitably result from the online archive they create leading up to, and during, the event.




Read more:
View from The Hill: A truly inclusive society requires political restraint


We all have a role to play

I have previously argued that this media environment seems to produce violent acts that otherwise may not have occurred. Of course, I don’t mean that the perpetrators are not responsible or accountable for their actions. Rather, performance crime represents a different type of activity specific to the technology and social phenomenon of social media – the accidental dark side of livestreaming services.

Would the alleged perpetrator of this terrorist act in Christchurch still have committed it without the capacity to livestream? We don’t know.

But as Majid Yar suggests, rather than concerning ourselves with old arguments about whether media violence can cause criminal behaviour, we should focus on how the techniques and reward systems we use to represent ourselves to online audiences are in fact a central component of these attacks.

We may hope that social media companies will get better at filtering out violent content, but until they do we should reflect on our own behaviour online. As we like and share content of all kinds on social platforms, let’s consider how our activities could contribute to an overall spectacle society that inspires future perpetrator-produced videos of performance crime – and act accordingly.

Stuart M Bender, Early Career Research Fellow (Digital aesthetics of violence), Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

It’s time for third-party data brokers to emerge from the shadows



Personal data has been dubbed the “new oil”, and data brokers are very efficient miners.
Emanuele Toscano/Flickr, CC BY-NC-ND

Sacha Molitorisz, University of Technology Sydney

Facebook announced last week it would discontinue the partner programs that allow advertisers to use third-party data from companies such as Acxiom, Experian and Quantium to target users.

Graham Mudd, Facebook’s product marketing director, said in a statement:

We want to let advertisers know that we will be shutting down Partner Categories. This product enables third party data providers to offer their targeting directly on Facebook. While this is common industry practice, we believe this step, winding down over the next six months, will help improve people’s privacy on Facebook.

Few people seemed to notice, and that’s hardly surprising. These data brokers operate largely in the background.

The invisible industry worth billions

In 2014, one researcher described the entire industry as “largely invisible”. That’s no mean feat, given how much money is being made. Personal data has been dubbed the “new oil”, and data brokers are very efficient miners. In the 2018 fiscal year, Acxiom expects annual revenue of approximately US$945 million.

The data broker business model involves accumulating information about internet users (and non-users) and then selling it. As such, data brokers have highly detailed profiles on billions of individuals, comprising age, race, sex, weight, height, marital status, education level, politics, shopping habits, health issues, holiday plans, and more.




Read more:
Facebook data harvesting: what you need to know


These profiles come not just from data you’ve shared, but from data shared by others, and from data that’s been inferred. In its 2014 report into the industry, the US Federal Trade Commission (FTC) showed how a single data broker had 3,000 “data segments” for nearly every US consumer.

Based on the interests inferred from this data, consumers are then placed in categories such as “dog owner” or “winter activity enthusiast”. However, some categories are potentially sensitive, including “expectant parent”, “diabetes interest” and “cholesterol focus”, or involve ethnicity, income and age. The FTC’s Jon Leibowitz described data brokers as the “unseen cyberazzi who collect information on all of us”.

In Australia, Facebook launched the Partner Categories program in 2015. Its aim was to “reach people based on what they do and buy offline”. This includes demographic and behavioural data, such as purchase history and home ownership status, which might come from public records, loyalty card programs or surveys. In other words, Partner Categories enables advertisers to use data brokers to reach specific audiences. This is particularly useful for companies that don’t have their own customer databases.

A growing concern

Third party access to personal data is causing increasing concern. This week, Grindr was shown to be revealing its users’ HIV status to third parties. Such news is unsettling, as if there are corporate eavesdroppers on even our most intimate online engagements.

The recent Cambridge Analytica furore also stemmed from third-party access. Indeed, apps created by third parties have proved particularly problematic for Facebook. From 2007 to 2014, Facebook encouraged external developers to create apps for users to add content, play games, share photos, and so on.




Read more:
Your online privacy depends as much on your friends’ data habits as your own


Facebook then gave the app developers wide-ranging access to user data, and to users’ friends’ data. The data shared might include details of schooling, favourite books and movies, or political and religious affiliations.

As one group of privacy researchers noted in 2011, this process, “which nearly invisibly shares not just a user’s, but a user’s friends’ information with third parties, clearly violates standard norms of information flow”.

With the Partner Categories program, the buying, selling and aggregation of user data may be largely hidden, but is it unethical? The fact that Facebook has moved to stop the arrangement suggests that it might be.

More transparency and more respect for users

To date, there has been insufficient transparency, insufficient fairness and insufficient respect for user consent. This applies to Facebook, but also to app developers, and to Acxiom, Experian, Quantium and other data brokers.

Users might have clicked “agree” to terms and conditions that contained a clause ostensibly authorising such sharing of data. However, it’s hard to construe this type of consent as a meaningful moral justification.




Read more:
You may be sick of worrying about online privacy, but ‘surveillance apathy’ is also a problem


In Australia, new laws are needed. Data flows in complex and unpredictable ways online, and legislation ought to provide, under threat of significant penalties, that companies (and others) must abide by reasonable principles of fairness and transparency when they deal with personal information. Further, such legislation can help specify what sort of consent is required, and in which contexts. Currently, the Privacy Act doesn’t go far enough, and is too rarely invoked.

In its 2014 report, the US Federal Trade Commission called for laws that enabled consumers to learn about the existence and activities of data brokers. That should be a starting point for Australia too: consumers ought to have reasonable access to information held by these entities.

Time to regulate

Having resisted regulation since 2004, Mark Zuckerberg has finally conceded that Facebook should be regulated – and advocated for laws mandating transparency for online advertising.

Historically, Facebook has made a point of dedicating itself to openness, but the company itself has often operated with a distinct lack of openness and transparency. Data brokers have been even worse.

Facebook’s motto used to be “Move fast and break things”. Now Facebook, data brokers and other third parties need to work with lawmakers to move fast and fix things.

Sacha Molitorisz, Postdoctoral Research Fellow, Centre for Media Transition, Faculty of Law, University of Technology Sydney

This article was originally published on The Conversation. Read the original article.

Why the business model of social media giants like Facebook is incompatible with human rights



Facebook’s actions – or inactions – facilitated breaches of privacy and human rights associated with democratic governance.
EPA/Peter DaSilva

Sarah Joseph, Monash University

Facebook has had a bad few weeks. The social media giant had to apologise for failing to protect the personal data of millions of users from being accessed by data mining company Cambridge Analytica. Outrage is brewing over its admission to spying on people via their Android phones. Its stock price plummeted, while millions deleted their accounts in disgust.

Facebook has also faced scrutiny over its failure to prevent the spread of “fake news” on its platforms, including via an apparent orchestrated Russian propaganda effort to influence the 2016 US presidential election.

Facebook’s actions – or inactions – facilitated breaches of privacy and human rights associated with democratic governance. But it might be that its business model – and those of its social media peers generally – is simply incompatible with human rights.

The good

In some ways, social media has been a boon for human rights – most obviously for freedom of speech.

Previously, the so-called “marketplace of ideas” was technically available to all (in “free” countries), but was in reality dominated by the elites. While all could equally exercise the right to free speech, we lacked equal voice. Gatekeepers, especially in the form of the mainstream media, largely controlled the conversation.

But today, anybody with internet access can broadcast information and opinions to the whole world. While not all will be listened to, social media is expanding the boundaries of what is said and received in public. The marketplace of ideas is effectively bigger, broader and more diverse.

Social media enhances the effectiveness of non-mainstream political movements, public assemblies and demonstrations, especially in countries that exercise tight controls over civil and political rights, or have very poor news sources.

Social media played a major role in co-ordinating the massive protests that brought down dictatorships in Tunisia and Egypt, as well as large revolts in Spain, Greece, Israel, South Korea, and the Occupy movement. More recently, it has facilitated the rapid growth of the #MeToo and #neveragain movements, among others.




Read more:
#MeToo is not enough: it has yet to shift the power imbalances that would bring about gender equality


The bad and the ugly

But the social media “free speech” machines can create human rights difficulties. Those newly empowered voices are not necessarily desirable voices.

The UN recently found that Facebook had been a major platform for spreading hatred against the Rohingya in Myanmar, which in turn led to ethnic cleansing and crimes against humanity.

Video sharing site YouTube seems to automatically guide viewers to the fringiest versions of what they might be searching for. A search on vegetarianism might lead to veganism; jogging to ultra-marathons; Donald Trump’s popularity to white supremacist rants; and Hillary Clinton to 9/11 trutherism.

YouTube, via its algorithm’s natural and probably unintended impacts, “may be one of the most powerful radicalising instruments of the 21st century”, with all the attendant human rights abuses that might follow.

The business model and human rights

Human rights abuses might be embedded in the business model that has evolved for social media companies in their second decade.

Essentially, those models are based on the collection and use for marketing purposes of their users’ data. And the data they have is extraordinary in its profiling capacities, and in the consequent unprecedented knowledge base and potential power it grants to these private actors.

Indirect political influence is commonly exercised, even in the most credible democracies, by private bodies such as major corporations. This power can be partially constrained by “anti-trust laws” that promote competition and prevent undue market dominance.

Anti-trust measures could, for example, be used to hive off Instagram from Facebook, or YouTube from Google. But these companies’ power essentially arises from the sheer number of their users: in late 2017, Facebook was reported as having more than 2.2 billion active users. Anti-trust measures target a company’s acquisitions; they do not seek to cap the number of its customers.

In late 2017, Facebook was reported as having more than 2.2 billion active users.
EPA/Ritchie B. Tongo

Power through knowledge

In 2010, Facebook conducted an experiment by randomly deploying a non-partisan “I voted” button into 61 million feeds during the US mid-term elections. That simple action led to 340,000 more votes, or about 0.14% of the US voting population. This number can swing an election. A bigger sample would lead to even more votes.

So Facebook knows how to deploy the button to sway an election, which would clearly be lamentable. However, the mere possession of that knowledge makes Facebook a political player. It now knows the button’s political impact, the types of people it is likely to motivate, which party is favoured by its deployment or non-deployment, and at what times of day.

It might seem inherently incompatible with democracy for that knowledge to be vested in a private body. Yet the retention of such data is the essence of Facebook’s ability to make money and run a viable business.




Read more:
Can Facebook influence an election result?


Microtargeting

A study has shown that, from an analysis of 70 “likes”, a computer knows more about a person’s personality than their friends or flatmates do; from 150 likes, it knows more than their family; and from 300 likes, it can outperform their spouse.

This enables the micro-targeting of people for marketing messages – whether those messages market a product, a political party or a cause. This is Facebook’s product, from which it generates billions of dollars. It enables extremely effective advertising and the manipulation of its users. This is so even without Cambridge Analytica’s underhanded methods.
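
To make that mechanism concrete, here is a minimal sketch of the kind of model the likes study describes, trained on synthetic data: each user’s page likes become a binary feature vector, and a regression predicts a personality score. Everything here (the random data, the trait, the model choice) is an illustrative assumption, not the study’s actual data or method.

```python
# Toy sketch of predicting a personality trait from "likes", on synthetic data.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 2000, 500

likes = rng.integers(0, 2, size=(n_users, n_pages))   # 1 = user liked this page
loadings = rng.normal(0, 1, size=n_pages)             # hidden per-page trait loadings
trait = likes @ loadings + rng.normal(0, 4, n_users)  # noisy personality score

# Fit on some users, then predict the trait for users the model has never seen.
X_train, X_test, y_train, y_test = train_test_split(likes, trait, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)
print(f"R^2 on held-out users: {model.score(X_test, y_test):.2f}")
```

The more likes observed per user, the more accurate the prediction becomes, which is the intuition behind the 70, 150 and 300-likes comparisons above; at Facebook’s scale, the same logic yields precise per-user targeting profiles.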

Advertising is manipulative: that is its point. Yet it is a long bow to label all advertising as a breach of human rights.

Advertising is available to all with the means to pay. Social media micro-targeting has become another battleground where money is used to attract customers and, in the political arena, influence and mobilise voters.

While the influence of money in politics is pervasive – and probably inherently undemocratic – it seems unlikely that spending money to deploy social media to boost an electoral message is any more a breach of human rights than other overt political uses of money.

Yet the extraordinary scale and precision of its manipulative reach might justify differential treatment of social media compared to other advertising, as its manipulative political effects arguably undermine democratic choices.

As with mass data collection, it may eventually be concluded that such reach is simply incompatible with democratic and human rights.

‘Fake news’

Finally, there is the issue of the spread of misinformation.

While paid advertising may not breach human rights, “fake news” distorts and poisons democratic debate. It is one thing for millions of voters to be influenced by precisely targeted social media messages, but another for maliciously false messages to influence and manipulate millions – whether paid for or not.

In a Declaration on Fake News, several UN and regional human rights experts said fake news interfered with the right to know and receive information – part of the general right to freedom of expression.

Its mass dissemination may also distort rights to participate in public affairs. Russia and Cambridge Analytica (assuming allegations in both cases to be true) have demonstrated how social media can be “weaponised” in unanticipated ways.

Yet it is difficult to know how social media companies should deal with fake news. The suppression of fake news is the suppression of speech – a human right in itself.

The preferred solution outlined in the Declaration on Fake News is to develop technology and digital literacy to enable readers to more easily identify fake news. The human rights community seems to be trusting that the proliferation of fake news in the marketplace of ideas can be corrected with better ideas rather than censorship.

However, one cannot be complacent in assuming that “better speech” triumphs over fake news. A recent study concluded fake news on social media:

… diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information.

Also, internet “bots” apparently spread true and false news at the same rate, which indicates that:

… false news spreads more than the truth because humans, not robots, are more likely to spread it.

The depressing truth may be that human nature is attracted to fake stories over the more mundane true ones, often because they satisfy predetermined biases, prejudices and desires. And social media now facilitates their wildfire spread to an unprecedented degree.

Perhaps social media’s purpose – the posting and sharing of speech – cannot help but generate a distorted and tainted marketplace of fake ideas that undermine political debate and choices, and perhaps human rights.

Fake news disseminated by social media is argued to have played a role in electing Donald Trump to the presidency.
EPA/Jim Lo Scalzo

What next?

It is premature to assert the very collection of massive amounts of data is irreconcilable with the right to privacy (and even rights relating to democratic governance).

Similarly, it is premature to decide that micro-targeting manipulates the political sphere beyond the bounds of democratic human rights.

Finally, it may be that better speech and corrective technology will help to undo fake news’ negative impacts: it is premature to assume that such solutions won’t work.

However, by the time such conclusions can be reached, it may be too late to do much about them. This may be a case where government regulation and international human rights law – and even business acumen and expertise – lag too far behind technological developments to appreciate their human rights dangers.

At the very least, we must now seriously question the business models that have emerged from the dominant social media platforms. Maybe the internet should be rewired from the grassroots, rather than be led by digital oligarchs’ business needs.

Sarah Joseph, Director, Castan Centre for Human Rights Law, Monash University

This article was originally published on The Conversation. Read the original article.

Why social media is in the doghouse for both the pollies and the public


Denis Muller, University of Melbourne

The Labor Party’s recent decision to ban its candidates from using their own social media accounts as publicity platforms at the next federal election may be a sign that society’s infatuation with social media as a source of news and information is cooling.

Good evidence for this emerged recently with the publication of the 2018 findings from the Edelman Trust Barometer. The annual study has surveyed more than 33,000 people across the globe about how much trust they have in institutions, including government, media, businesses and NGOs.

This year, there was a sharp increase in trust in journalism as a source of news and information, and a decline in trust in social media and search engines for this purpose. Globally, trust in journalism rose five points to 59%, while trust in social media and search engines fell two points to 51% – a gap of eight points.

In Australia, the level of trust in both was below the global average, but the gap between them was greater at 17 points – 52% for journalism and 35% for social media and search engines.




Read more:
Social media is changing the face of politics – and it’s not good news


Consequences of poor social media savvy

Labor’s decision may also reflect a healthy distrust of its candidates’ judgement about how to use social media for political purposes.

Liberal Senator Jim Molan’s recent sharing of an anti-Islamic post by the British right-wing extremist group Britain First on his Facebook account showed how poor some individual judgements can be.

If ever there was a two-edged sword in politics, social media is it. It gives politicians a weapon with which to cut their way past traditional journalistic gatekeepers and reach the public directly, but it also exposes them to public scrutiny with a relentless intensity that previous generations of politicians never had to endure.

This intensity comes from two sources: the 24/7 news cycle with the associated nonstop interaction between traditional journalism and social media, and the opportunity that digital technology gives everyone to jump instantaneously into public debate.

So Molan’s stupidity, for example, now attracts criticism from the other side of the world. Brendan Cox, the widower of a British politician, Jo Cox, who was murdered by a man yelling “Britain first”, has weighed in.

The interaction between traditional journalism and social media also means journalists can latch onto stories much more quickly because there are countless pairs of eyes and ears out there tipping them off.




Read more:
Social media can bring down politicians, but can it also make politics better?


The result of this scrutiny is that public figures can never be sure they are off-camera, as it were. This means there has been a significant reduction in their power to control the flow of information about themselves. They are liable to be “on the record” anywhere there is a mic or a smartphone – and may not even know it.

Immigration Minister Peter Dutton is caught on a boom mic quipping about the plight of Pacific Island nations facing rising seas from climate change.

Politics then and now

On Sunday night, the ABC aired part one of the two-part documentary Bob Hawke: The Larrikin and the Leader. In it, Graham Richardson says of Hawke:

He did some appalling things when drunk … He was lucky that he went through an era where he couldn’t be pinged. We didn’t have the internet. We didn’t have mobile phones. Let’s face it, a Bob Hawke today behaving in the same manner would never become prime minister. He’d have been buried long before he got near the parliament.

Would we now think differently of a politician like Bob Hawke if some of his well-documented excesses had been captured and circulated on social media in this way?

Perhaps not. Hawke was of his time, an embodiment of the national mood and of what Australians imagine to be the national larrikin character. He might have thrived.

Hawke is still celebrated for his ability to scull a beer.

With Hawke, what you saw was what you got. So he had a built-in immunity to social media’s particular strength: its capacity to show people up as ridiculous, dishonest or hypocritical.

And his political opponent Malcolm Fraser was, in his later years, adept at using Twitter to criticise the government of one of his Liberal successors as Prime Minister, Tony Abbott.

Yet by exerting the iron discipline for which he was famous, saying exactly what he wanted to say and not a word more, Fraser avoided the pitfalls that the likes of Senator Molan stumble into.

Indeed, US President Donald Trump’s reputation for Twitter gaffes hasn’t hurt his popularity among his base, and is even lauded by some as a mark of authenticity.

So it is likely that the politicians of the past would not have fared very differently from those of the present. The competent would have adapted and used social media to their advantage; the incompetent would have been shown up for what they are.




Read more:
Why social media may not be so good for democracy


Social platforms under fire

Social media has the potential to strengthen democratic life. It makes all public figures – including journalists – more accountable. But as we have seen, especially in the 2016 US presidential election, it can also be used to weaken democratic life by amplifying the spread of false information.

As a result, democracies everywhere are wrestling with the overarching problem of how to make the giant social media platforms, especially Facebook, accountable for how they use their publishing power.

Out of all this, one trend seems clear: where news and information is concerned, society is no longer dazzled by the novelty of social media and is waking up to its weaknesses.

Denis Muller, Senior Research Fellow in the Centre for Advancing Journalism, University of Melbourne

This article was originally published on The Conversation. Read the original article.