Is social media damaging to children and teens? We asked five experts



They need to have it to fit in, but social media is probably doing teens more harm than good.
Shutterstock

Alexandra Hansen, The Conversation

If you have kids, chances are you’ve worried about their presence on social media.

Who are they talking to? What are they posting? Are they being bullied? Do they spend too much time on it? Do they realise their friends’ lives aren’t as good as they look on Instagram?

We asked five experts if social media is damaging to children and teens.

Four out of five experts said yes

The four experts who ultimately said social media is damaging cited its negative effects on mental health and sleep, cyberbullying, social comparison, privacy concerns and body image.

However, they also conceded it can have positive effects in connecting young people with others, and that living without it might be even more ostracising.

The dissenting voice said it’s not social media itself that’s damaging, but how it’s used.

Here are their detailed responses:


If you have a “yes or no” health question you’d like posed to Five Experts, email your suggestion to: alexandra.hansen@theconversation.edu.au


Karyn Healy is a researcher affiliated with the Parenting and Family Support Centre at The University of Queensland and a psychologist working with schools and families to address bullying. Karyn is co-author of a family intervention for children bullied at school. Karyn is a member of the Queensland Anti-Cyberbullying Committee, but not a spokesperson for this committee; this article presents only her own professional views.

Alexandra Hansen, Chief of Staff, The Conversation

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Goodbye Google+, but what happens when online communities close down?



Google+ is the latest online community to close.
Shutterstock/rvlsoft

Stan Karanasios, RMIT University

This week saw the closure of Google+, an attempt by the online giant to create a social media community to rival Facebook.

If the Australian usage of Google+ is anything to go by – just 45,000 users in March compared to Facebook’s 15 million – it never really caught on.

Google+ is no longer available to users.
Google+/Screengrab

But the Google+ shutdown follows a string of organisations that have disabled or restricted community features such as reviews, user comments and message boards (forums).




Read more:
Sexual subcultures are collateral damage in Tumblr’s ban on adult content


So are we witnessing the decline of online communities and user comments?

Turning off online communities and user-generated content

One of the best-known message boards – which had existed on the popular movie website IMDb since 2001 – was shut down by its owner Amazon in 2017 with just two weeks’ notice to users.

This is not confined to online communities; it mirrors a broader trend among organisations to restrict or turn off their user-generated content. Last year the subscription video-on-demand website Netflix said it would no longer allow users to write reviews. It subsequently deleted all existing user-generated reviews.

Other popular websites have disabled their comments sections, including National Public Radio (NPR), The Atlantic, Popular Science and Reuters.

Why the closures?

Organisations have a range of motivations for taking such actions: low uptake, running costs, the challenges of managing moderation, and the problems of divisive comments, conflict and a lack of community cohesion.

In the case of Google+, low usage alongside data breaches appears to have hastened the decision.

NPR explained its motivation to remove user comments by highlighting how in one month its website NPR.org attracted 33 million unique users and 491,000 comments. But those comments came from just 19,400 commenters; the number of commenters who posted in consecutive months was a fraction of that.

This led NPR’s managing editor for digital news, Scott Montgomery, to say:

We’ve reached the point where we’ve realized that there are other, better ways to achieve the same kind of community discussion around the issues we raise in our journalism.

He said audiences had also moved to engage with NPR more on Facebook and Twitter.

Likewise, The Atlantic explained that its comments sections had become “unhelpful, even destructive, conversations”, and said it was exploring new ways to give users a voice.

In the case of IMDb closing its message boards in 2017, the reason given was:

[…] we have concluded that IMDb’s message boards are no longer providing a positive, useful experience for the vast majority of our more than 250 million monthly users worldwide.

The organisation also nudged users towards other forms of social media, such as its Facebook page and Twitter account @IMDB, as the “(…) primary place they (users) choose to post comments and communicate with IMDb’s editors and one another”.

User backlash

Unsurprisingly, such actions often lead to confusion, criticism and disengagement by user communities, and in some cases petitions to have the features reinstated (such as this one for Google+) and boycotts of the organisations.

But most organisations factor these reactions into their decision-making.

The petition to save IMDb’s message boards.
Change.org/Screengrab

For fans of such community features these trends point to some harsh realities. Even though communities may self-organise and thrive, and users are co-creators of value and content, the functionality and governance are typically beyond their control.

Community members are at the mercy of hosting organisations, some profit-driven, whose motivations may conflict with those of their users. It’s those organisations that hold the power to change or shut down what some consider critical sources of knowledge, engagement and community building.

In the aftermath of shutdowns, my research shows that communities that existed on an organisation’s message boards in particular may struggle to reform.

This can be due to a number of factors, such as high switching costs, and communities can become fragmented because of the range of other options (Reddit, Facebook and other message boards).

So it’s difficult for users to preserve and maintain their communities once their original home is disabled. In the case of Google+, even its Mass Migration Group – which aims to help people, organisations and groups find “new online homes” – may not be enough to hold its online communities together.

The trend towards the closure of online communities by organisations might represent a means to reduce their costs in light of declining usage and the availability of other online options.

It’s also a move away from dealing with the reputational issues such features create, and from having to manage the conversation that takes place within their user bases. Trolling, conflicts and divisive comments are common in online communities and user comment spaces.

Lost community knowledge

But within online groups there is often social and network capital, as well as a stock of valuable knowledge that such community features create.




Read more:
Zuckerberg’s ‘new rules’ for the internet must move from words to actions


Often these communities are made up of communities of practice (people with a shared passion or concern) on topics ranging from movie theories to parenting.

They are go-to sources for users where meaningful interactions take place and bonds are created. User comments also allow people to engage with important events and debates, and can be cathartic.

Closing these spaces risks not only a loss of user community bases, but also a loss of this valuable community knowledge on a range of issues.

Stan Karanasios, Senior Research Fellow, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Livestreaming terror is abhorrent – but is more rushed legislation the answer?



The perpetrator of the Christchurch attacks livestreamed his killings on Facebook.
Shutterstock

Robert Merkel, Monash University

In the wake of the Christchurch attack, the Australian government has announced its intention to create new criminal offences relating to the livestreaming of violence on social media platforms.

The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill will create two new crimes:

It will be a criminal offence for social media platforms not to remove abhorrent violent material expeditiously. This will be punishable by 3 years’ imprisonment or fines that can reach up to 10% of the platform’s annual turnover.

Platforms anywhere in the world must notify the Australian Federal Police if they become aware their service is streaming abhorrent violent conduct that is happening in Australia. A failure to do this will be punishable by fines of up to A$168,000 for an individual or A$840,000 for a corporation.

The government is reportedly seeking to pass the legislation in the current sitting week of Parliament. This could be the last sitting week of the current parliament before an election is called. Labor, or some group of crossbenchers, will need to vote with the government if the legislation is to pass. But the draft bill was only made available to the Labor Party last night.

This is not the first time that legislation relating to the intersection of technology and law enforcement has been raced through parliament, to the consternation of parts of the technology industry and other groups. Ongoing concerns around the Assistance and Access Bill demonstrate the risks of such rushed legislation.




Read more:
China bans streaming video as it struggles to keep up with live content


Major social networks already moderate violence

The government has defined “abhorrent violent material” as:

[…] material produced by a perpetrator, and which plays or livestreams the very worst types of offences. It will capture the playing or streaming of terrorism, murder, attempted murder, torture, rape and kidnapping on social media.

The major social media platforms already devote considerable resources to content moderation. They are often criticised for their moderation policies, and the inconsistent application of those policies. But content fitting the government’s definition is already clearly prohibited by Twitter, Facebook, and Snapchat.

Social media companies rely on a combination of technology, and thousands of people employed as content moderators to remove graphic content. Moderators (usually contractors, often on low wages) are routinely called on to remove a torrent of abhorrent material, including footage of murders and other violent crimes.




Read more:
We need to talk about the mental health of content moderators


Technology is helpful, but not a solution

Technologies developed to assist with content moderation are less advanced than one might hope – particularly for videos. Facebook’s own moderation tools are mostly proprietary. But we can get an idea of the commercial state of the art from Microsoft’s Content Moderator API.

The Content Moderator API is an online service designed to be integrated by programmers into consumer-facing communication systems. Microsoft’s tools can automatically recognise “racy or adult content”. They can also identify images similar to ones in a list. This kind of technology is used by Facebook, in cooperation with the office of the eSafety Commissioner, to help track and block image-based abuse – commonly but erroneously described as “revenge porn”.

The Content Moderator API cannot automatically classify an image, let alone a video, as “abhorrent violent content”. Nor can it automatically identify videos similar to another video.

Technology that could match videos is under development. For example, Microsoft is currently trialling a matching system specifically for video-based child exploitation material.

As well as developing new technologies themselves, the tech giants are enthusiastic adopters of methods and ideas devised by academic researchers. But they are some distance from being able to automatically identify re-uploads of videos that violate their terms of service, particularly when uploaders modify the video to evade moderators. The ability to automatically flag these videos as they are uploaded or streamed is even more challenging.
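
To make that gap concrete, here is a minimal sketch of the simplest form of the matching idea mentioned above – comparing an upload against a list of known images – using a naive “average hash”. This is an illustration only, not Facebook’s, Microsoft’s or any platform’s actual system; it assumes the Pillow imaging library and placeholder file names. It catches exact or lightly re-compressed copies, but crops, overlays or re-encoding shift the fingerprint enough to evade it, which is part of why modified video re-uploads are so much harder to flag automatically.

```python
# Illustrative only: a naive "average hash" matcher, not any platform's real pipeline.
# Assumes Pillow is installed (pip install Pillow); file names are placeholders.
from PIL import Image


def average_hash(path, size=8):
    """Reduce an image to a 64-bit fingerprint: shrink it, convert to greyscale,
    then record whether each pixel is above or below the mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits


def hamming_distance(a, b):
    """Count how many bits differ between two fingerprints."""
    return bin(a ^ b).count("1")


def is_probably_banned(upload_path, banned_hashes, threshold=5):
    """Flag an upload whose fingerprint is within a few bits of a banned one.
    Cropping, text overlays or heavy re-encoding easily push the distance past
    the threshold, which is how modified re-uploads slip through."""
    h = average_hash(upload_path)
    return any(hamming_distance(h, b) <= threshold for b in banned_hashes)


if __name__ == "__main__":
    banned = [average_hash("known_banned_image.jpg")]    # hypothetical reference image
    print(is_probably_banned("new_upload.jpg", banned))  # hypothetical upload
```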

Important questions, few answers so far

Evaluating the government’s proposed legislative amendments is difficult given that details are scant. I’m a technologist, not a legal academic, but the scope and application of the legislation are currently unclear. Before any legislation is passed, a number of questions need to be addressed – too many to list here, but for instance:

Does the requirement to remove “abhorrent violent material” apply only to material created or uploaded by Australians? Does it only apply to events occurring within Australia? Or could foreign social media companies be liable for massive fines if videos created in a foreign country, and uploaded by a foreigner, were viewed within Australia?

Would attempts to render such material inaccessible from within Australia suffice (even though workarounds are easy)? Or would removal from access anywhere in the world be required? Would Australians be comfortable with a foreign law that required Australian websites to delete content displayed to Australians based on the decisions of a foreign government?




Read more:
Anxieties over livestreams can help us design better Facebook and YouTube content moderation


Complex legislation needs time

The proposed legislation does nothing to address the broader issues surrounding promotion of the violent white supremacist ideology that apparently motivated the Christchurch attacker. While that does not necessarily mean it’s a bad idea, it would seem very far from a full governmental response to the monstrous crime an Australian citizen allegedly committed.

It may well be that the scope and definitional issues are dealt with appropriately in the text of the legislation. But considering the government seems set on passing the bill in the next few days, it’s unlikely lawmakers will have the time to carefully consider the complexities involved.

While the desire to prevent further circulation of perpetrator-generated footage of terrorist attacks is noble, taking effective action is not straightforward. Yet again, the federal government’s inclination seems to be to legislate first and discuss later.

Robert Merkel, Lecturer in Software Engineering, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Digital campaigning on sites like Facebook is unlikely to swing the election




Glenn Kefford, Macquarie University

With the federal election now officially underway, commentators have begun to consider not only the techniques parties and candidates will use to persuade voters, but also any potential threats we are facing to the integrity of the election.

Invariably, this discussion leads straight to digital.

In the aftermath of the 2016 United States presidential election, the coverage of digital campaigning has been unparalleled. But this coverage has done very little to improve understanding of the key issues confronting our democracies as a result of the continued rise of digital modes of campaigning.

Some degree of confusion is understandable, since digital campaigning is opaque – especially in Australia. We have very little information on what political parties or third-party campaigners are spending their money on, some of which comes from taxpayers. But the hysteria around digital is, for the most part, unfounded.




Read more:
Chinese social media platform WeChat could be a key battleground in the federal election


Why parties use digital media

In any attempt to better understand digital, it’s useful to consider why political parties and other campaigners are using it as part of their election strategies. The reasons are relatively straightforward.

The media landscape is fragmented. Voters are active on social media platforms, such as Facebook and Instagram, so that’s where the parties need to be.

Compared to the cost of advertising on television, radio or in print, digital advertising is very affordable.

Platforms like Facebook offer services that give campaigners a relatively straightforward way to segment voters. Campaigners can use these tools to micro-target them with tailored messaging.

Voting, persuasion and mobilisation

While there is certainly more research required into digital campaigning, there is no scholarly study I know of that suggests advertising online – including micro-targeted messaging – has the effect that it is often claimed to have.

What we know is that digital messaging can have a small but significant effect on mobilisation, that there are concerns about how it could be used to demobilise voters, and that it is an effective way to fundraise and organise. But its ability to independently persuade voters to change their votes is estimated to be close to zero.




Read more:
Australian political journalists might be part of a ‘Canberra bubble’, but they engage the public too


The exaggeration and lack of clarity around digital are problematic because there is almost no evidence to support many of the claims made. This type of technology fetishism also implies that voters are easily manipulated, when there is little evidence of this.

While it might help some commentators to rationalise unexpected election results, a more fruitful endeavour than blaming technology would be to try to understand why voters are attracted to various parties or candidates, such as Trump in the US.

Digital campaigning is not a magic bullet, so commentators need to stop treating it as if it is. Parties hope it helps their persuasion efforts, but it does so by layering their messages across as many media as possible and using the network effect that social media provides.

Data privacy and foreign interference

The two clear and obvious dangers related to digital are data privacy and foreign meddling. We should not accept that our data are shared widely as a result of some box we ticked online. And we should have greater control over how our data are used, and who they are sold to.

An obvious starting point in Australia is questioning whether parties should continue to be exempt from privacy legislation. Research suggests that a majority of voters see a distinction between commercial entities advertising to us online and parties or other campaigners doing the same.

We also need to take some personal responsibility, since many of us do not always take our digital footprint as seriously as we should. It matters, and we need to educate ourselves on this.

The more vexing issue is that of foreign interference. One of the first things we need to recognise is that it is unlikely this type of meddling online would independently turn an election.

This does not mean we should accept this behaviour, but changing election results is just one of the goals these actors have. Increasing polarisation and contributing to long-term social divisions is part of the broader strategy.




Read more:
Australia should strengthen its privacy laws and remove exemptions for politicians


The digital battleground

As the 2019 campaign unfolds, we should remember that, while digital matters, there is no evidence it has an independent election-changing effect.

Australians should be most concerned with how our data are being used and sold, and about any attempts to meddle in our elections by state and non-state actors.

The current regulatory environment fails to meet community standards. More can and should be done to protect us and our democracy.


This article has been co-published with The Lighthouse, Macquarie University’s multimedia news platform.

Glenn Kefford, Senior Lecturer, Department of Modern History, Politics and International Relations, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Shadow profiles – Facebook knows about you, even if you’re not on Facebook


Andrew Quodling, Queensland University of Technology

Facebook’s founder and chief executive Mark Zuckerberg faced two days of grilling before US politicians this week, following concerns over how his company deals with people’s data.

But the data Facebook has on people who are not signed up to the social media giant also came under scrutiny.

During Zuckerberg’s congressional testimony he claimed to be ignorant of what are known as “shadow profiles”.

Zuckerberg: I’m not — I’m not familiar with that.

That’s alarming, given that we have been discussing this element of Facebook’s non-user data collection for the past five years, ever since the practice was brought to light by researchers at Packet Storm Security.

Maybe it was just the phrase “shadow profiles” with which Zuckerberg was unfamiliar. It wasn’t clear, but others were not impressed by his answer.


Facebook’s proactive data-collection processes have been under scrutiny in previous years, especially as researchers and journalists have delved into the workings of Facebook’s “Download Your Information” and “People You May Know” tools to report on shadow profiles.

Shadow profiles

To explain shadow profiles simply, let’s imagine a social group of three people – Ashley, Blair and Carmen – who already know one another, and have each other’s email addresses and phone numbers in their phones.

If Ashley joins Facebook and uploads her phone contacts to Facebook’s servers, then Facebook can proactively suggest friends whom she might know, based on the information she uploaded.

For now, let’s imagine that Ashley is the first of her friends to join Facebook. The information she uploaded is used to create shadow profiles for both Blair and Carmen — so that if Blair or Carmen joins, they will be recommended Ashley as a friend.

Next, Blair joins Facebook, uploading his phone’s contacts too. Thanks to the shadow profile, he has a ready-made connection to Ashley in Facebook’s “People You May Know” feature.

At the same time, Facebook has learned more about Carmen’s social circle — in spite of the fact that Carmen has never used Facebook, and therefore has never agreed to its policies for data collection.
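
The mechanism is easy to see in code. The following is a deliberately simplified sketch of the idea, not Facebook’s actual implementation: contact uploads from members are enough to create records for, and infer connections between, people who have never signed up.

```python
# Simplified illustration of shadow profiles; not Facebook's real code.
# Profiles are keyed by a contact detail such as an email address or phone number.
profiles = {}  # contact -> {"member": bool, "knows": set of contacts}


def get_profile(contact):
    return profiles.setdefault(contact, {"member": False, "knows": set()})


def join(contact, uploaded_contacts):
    """A new member joins and uploads their address book."""
    me = get_profile(contact)
    me["member"] = True
    for other in uploaded_contacts:
        get_profile(other)["knows"].add(contact)  # creates a "shadow" record for non-members
        me["knows"].add(other)


def people_you_may_know(contact):
    """Suggest existing members this person is already linked to."""
    return [c for c in get_profile(contact)["knows"] if profiles[c]["member"]]


# Ashley joins first and uploads Blair's and Carmen's details.
join("ashley@example.com", ["blair@example.com", "carmen@example.com"])
# Blair joins later; Ashley is immediately suggested to him.
join("blair@example.com", ["ashley@example.com", "carmen@example.com"])
print(people_you_may_know("blair@example.com"))  # ['ashley@example.com']
# Carmen has never joined, yet a record linking her to both already exists.
print(profiles["carmen@example.com"]["knows"])   # {'ashley@example.com', 'blair@example.com'} (order may vary)
```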

Despite the scary-sounding name, I don’t think there is necessarily any malice or ill will in Facebook’s creation and use of shadow profiles.

It seems like an earnestly designed feature in service of Facebook’s goal of connecting people. It’s a goal that clearly also aligns with Facebook’s financial incentives for growth and garnering advertising attention.

But the practice brings to light some thorny issues around consent, data collection, and personally identifiable information.

What data?

Some of the questions Zuckerberg faced this week highlighted issues relating to the data that Facebook collects from users, and the consent and permissions that users give (or are unaware they give).

Facebook is often quite deliberate in its characterisations of “your data”, rejecting the notion that it “owns” user data.

That said, there are a lot of data on Facebook, and what exactly is “yours” or just simply “data related to you” isn’t always clear. “Your data” notionally includes your posts, photos, videos, comments, content, and so on. It’s anything that could be considered copyrightable work or intellectual property (IP).

What’s less clear is the state of your rights relating to data that is “about you”, rather than supplied by you. This is data that is created by your presence or your social proximity to Facebook.

Examples of data “about you” might include your browsing history and data gleaned from cookies, tracking pixels, and the like button widget, as well as social graph data supplied whenever Facebook users supply the platform with access to their phone or email contact lists.
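
To make one of these mechanisms concrete, here is a minimal sketch of how a generic tracking pixel works – a hypothetical server built with Flask, not Facebook’s actual code or endpoints. A third-party page embeds a tiny invisible image hosted by the platform; every request for that image tells the platform which page you were reading (via the Referer header) and, via a cookie, lets it link your visits together even if you never log in.

```python
# Generic illustration of a tracking pixel; not Facebook's implementation.
# Assumes Flask is installed (pip install flask); domain and cookie names are made up.
from flask import Flask, request, make_response

app = Flask(__name__)

# A 1x1 transparent GIF: the classic "pixel".
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00!"
         b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")


@app.route("/pixel.gif")
def pixel():
    # The Referer header reveals which third-party page embedded the pixel;
    # the cookie (if present) ties this request to a browser seen before.
    print("page visited:", request.headers.get("Referer"),
          "| visitor id:", request.cookies.get("visitor_id"))
    resp = make_response(PIXEL)
    resp.headers["Content-Type"] = "image/gif"
    if not request.cookies.get("visitor_id"):
        # Set a long-lived identifier so future visits can be linked together.
        resp.set_cookie("visitor_id", "abc123", max_age=60 * 60 * 24 * 365)
    return resp


# A publisher would embed it as:
#   <img src="https://tracker.example/pixel.gif" width="1" height="1">
if __name__ == "__main__":
    app.run()
```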

Like most internet platforms, Facebook rejects any claim to ownership of the IP that users post. To avoid falling foul of copyright issues in the provision of its services, Facebook demands (as part of its user agreements and Statement of Rights and Responsibilities) a:

…non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.

Data scares

If you’re on Facebook then you’ve probably seen a post that keeps making the rounds every few years, saying:

In response to the new Facebook guidelines I hereby declare that my copyright is attached to all of my personal details…

Part of the reason we keep seeing data scares like this is that Facebook’s lacklustre messaging around user rights and data policies has contributed to confusion, uncertainty and doubt among its users.




Read more:
How to stop haemorrhaging data on Facebook


It was a point that Republican Senator John Kennedy raised with Zuckerberg this week.

Senator Kennedy’s exclamation was a strong but fair assessment of the failings of Facebook’s policy messaging.

After the grilling

Zuckerberg and Facebook should learn from this congressional grilling that they have struggled and occasionally failed in their responsibilities to users.

It’s important that Facebook now makes efforts to communicate more strongly with users about their rights and responsibilities on the platform, as well as the responsibilities that Facebook owes them.

This should go beyond a mere awareness-style PR campaign. It should seek to truly inform and educate Facebook’s users, and people who are not on Facebook, about their data, their rights, and how they can meaningfully safeguard their personal data and privacy.




Read more:
Would regulation cement Facebook’s market power? It’s unlikely


Given the magnitude of Facebook as an internet platform, and its importance to users across the world, the spectre of regulation will continue to rear its head.

Ideally, the company should look to broaden its governance horizons by seeking to truly engage in consultation and reform with Facebook’s stakeholders – its users – as well as the civil society groups and regulatory bodies that seek to empower users in these spaces.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.

Four ways social media companies and security agencies can tackle terrorism


Robyn Torok, Edith Cowan University

Prime Minister Malcolm Turnbull has joined Britain’s Prime Minister Theresa May in calling on social media companies to crack down on extremist material being published by users.

It comes in the wake of the recent terror attacks in Australia and Britain.

Facebook is considered a hotbed for terrorist recruitment, incitement, propaganda and the spreading of radical thinking. Twitter, YouTube and encrypted services such as WhatsApp and Telegram are also implicated.

Addressing the extent of such content on social media requires international cooperation from the large social media platforms themselves, as well as from encrypted services.

Some of that work is already underway by many social media operators, with Facebook’s rules on this leaked only last month. Twitter says that in one six-month period it suspended 376,890 accounts related to the promotion of terrorism.

While these measures are a good start, more can be done. A focus on disruption, encryption, recruitment and creating counter-narratives is recommended.

Disruption: remove content, break flow-on

Disruption of terrorists on social media involves reporting and taking down radical elements and acts of violence, whether radical accounts or posted content that breaches community safety and standards.

Both the timing and the thoroughness of this removal are critical.

Disruption is vital for removing extreme content and breaking the flow-on effect while someone is in the process of being recruited by extremists.

Taking down accounts and content is difficult as there is often a large volume of content to remove. Sometimes it is not removed as quickly as needed. In addition, extremists typically have multiple accounts and can operate under various aliases at the same time.

Encryption: security authorities need access

When Islamic extremists use encrypted channels, it makes the fight against terrorism much harder. Extremists readily shift from public forums to encrypted areas, and often work in both simultaneously.

Encrypted networks are fast becoming a problem because of the “burn time” (destruction of messages) and the fact that extremists can communicate mostly undetected.

Operations to attack and kill members of the public in the West have been propagated on these encrypted networks.

The extremists set up a unique way of communicating within encrypted channels to offer advice. That way a terrorist can directly communicate with the Islamic State group and receive directives to undertake an attack in a specific country, including operational methods and procedures.

This is extremely concerning, and authorities – including intelligence agencies and federal police – require access to encrypted networks to do their work more effectively. They need the ability to access servers to obtain vital information to help thwart possible attacks on home soil.

This access will need to be granted in consultation with the companies that offer these services. But such access could be challenging and there could also be a backlash from privacy groups.

Recruitment: find and follow key words

It was once thought that the process of recruitment occurred over extended periods of time. This is true in some instances, and it depends on a multitude of individual experiences, personality types, one’s perception of identity, and the types of strategies and techniques used in the recruitment process.

There is no one path toward violent extremism, but what makes the process of recruitment quicker is the neurolinguistic programming (NLP) method used by terrorists.

Extremists use NLP across multiple platforms and are quick to usher their recruits into encrypted chats.

Key terms are always used alongside NLP, such as “in the heart of green birds” (which is used in reference to martyrdom), “Istishhad” (operational heroism of loving death more than the West loves life), “martyrdom” and “Shaheed” (becoming a martyr).

If social media companies know and understand these key terms, they can help by removing any reference to them on their platforms. This is being done by some platforms to a degree, but in many cases social media operators still rely heavily on users reporting inappropriate material.
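
As a rough sketch of what acting on such a term list might look like – an illustration only, not any platform’s real moderation pipeline – a naive first pass simply flags posts containing known terms for human review. The caveat above applies: coded language, spelling variants and context easily defeat exact matching, which is why human moderators and user reports remain essential.

```python
# Illustrative keyword flagging only; real moderation combines many signals and human review,
# and extremists routinely vary spelling and phrasing to evade exact matches.
import re

# Terms drawn from the article; a real list would be far larger and expert-maintained.
KEY_TERMS = [
    "in the heart of green birds",
    "istishhad",
    "martyrdom",
    "shaheed",
]

# One case-insensitive pattern with word boundaries around each term.
PATTERN = re.compile(
    "|".join(r"\b" + re.escape(term) + r"\b" for term in KEY_TERMS),
    re.IGNORECASE,
)


def flag_for_review(post_text):
    """Return any matched terms so a human moderator can assess the context."""
    return [match.group(0) for match in PATTERN.finditer(post_text)]


print(flag_for_review("He spoke of being in the heart of green birds."))
# ['in the heart of green birds']
```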

Create counter-narratives: banning alone won’t work

Since there are so many social media applications, each with a high volume of material that is both very dynamic and fluid, any attempts to deal with extremism must accept the limitations and challenges involved.

Attempts to shut down sites, channels, and web pages are just one approach. It is imperative that efforts are not limited to such strategies.

Counter-narratives are essential, as these deconstruct radical ideologies and expose their flaws in reasoning.

But these counter-narratives need to be more sophisticated given the ability of extremists to manipulate arguments and appeal to emotions, especially by using horrific images.

This is particularly important for those on the social fringe, who may feel a sense of alienation.

It is important for these individuals to realise that such feelings can be addressed within the context of mainstream Islam without resorting to radical ideologies that leave them open to exploitation by experienced recruiters. Such recruiters are well practised and know how to identify individuals who are struggling, and how to usher them along radical pathways.

Ultimately, there are ways around all procedures that attempt to tackle the problem of terrorist extremism on social media. But steps are slowly being taken to reduce the risk and spread of radical ideologies.

This must include counter-narratives, as well as the timely eradication of extremist material identified by keywords and of any material from key radical preachers.

Robyn Torok, PhD, researcher and analyst, Edith Cowan University

This article was originally published on The Conversation. Read the original article.