Shadow profiles – Facebook knows about you, even if you’re not on Facebook


Andrew Quodling, Queensland University of Technology

Facebook’s founder and chief executive Mark Zuckerberg faced two days of grilling before US politicians this week, following concerns over how his company deals with people’s data.

But the data Facebook has on people who are not signed up to the social media giant also came under scrutiny.

During Zuckerberg’s congressional testimony he claimed to be ignorant of what are known as “shadow profiles”.

Zuckerberg: I’m not — I’m not familiar with that.

That’s alarming, given that we have been discussing this element of Facebook’s non-user data collection for the past five years, ever since the practice was brought to light by researchers at Packet Storm Security.

Maybe it was just the phrase “shadow profiles” with which Zuckerberg was unfamiliar. It wasn’t clear, but others were not impressed by his answer.


Facebook’s proactive data-collection processes have been under scrutiny in previous years, especially as researchers and journalists have delved into the workings of Facebook’s “Download Your Information” and “People You May Know” tools to report on shadow profiles.

Shadow profiles

To explain shadow profiles, let’s imagine a social group of three people – Ashley, Blair and Carmen – who already know one another, and have each other’s email addresses and phone numbers saved in their phones.

If Ashley joins Facebook and uploads her phone contacts to Facebook’s servers, then Facebook can proactively suggest friends whom she might know, based on the information she uploaded.

For now, let’s imagine that Ashley is the first of her friends to join Facebook. The information she uploaded is used to create shadow profiles for both Blair and Carmen — so that if Blair or Carmen joins, they will be recommended Ashley as a friend.

Next, Blair joins Facebook, uploading his phone’s contacts too. Thanks to the shadow profile, he has a ready-made connection to Ashley in Facebook’s “People You May Know” feature.

At the same time, Facebook has learned more about Carmen’s social circle — in spite of the fact that Carmen has never used Facebook, and therefore has never agreed to its policies for data collection.
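The Ashley–Blair–Carmen example above can be sketched in a few lines of code. This is purely an illustrative toy model of how contact uploads could seed profiles for people who have never joined a platform – the data structure and function names are my own assumptions, not Facebook’s actual implementation.

```python
# Toy model: shadow profiles keyed by a contact detail (email or phone),
# each mapping to the set of members who uploaded that contact.
shadow_profiles = {}

def upload_contacts(member, contacts):
    """Record that `member` knows each contact - including non-members."""
    for contact in contacts:
        shadow_profiles.setdefault(contact, set()).add(member)

def people_you_may_know(contact):
    """When someone joins, suggest everyone who already uploaded them."""
    return shadow_profiles.get(contact, set())

# Ashley joins first and uploads Blair's and Carmen's details.
upload_contacts("ashley", ["blair@example.com", "carmen@example.com"])

# Blair joins later: the shadow profile yields a ready-made suggestion.
print(sorted(people_you_may_know("blair@example.com")))   # ['ashley']

# Blair uploads his contacts too. Carmen has never joined, yet the
# platform now knows she is connected to both Ashley and Blair.
upload_contacts("blair", ["ashley@example.com", "carmen@example.com"])
print(sorted(people_you_may_know("carmen@example.com")))  # ['ashley', 'blair']
```

The point of the sketch is that Carmen never interacts with the system at all: her entry is created and enriched entirely by other people’s uploads.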

Despite the scary-sounding name, I don’t think there is necessarily any malice or ill will in Facebook’s creation and use of shadow profiles.

It seems like an earnestly designed feature in service of Facebook’s goal of connecting people. It’s a goal that clearly also aligns with Facebook’s financial incentives for growth and garnering advertising attention.

But the practice brings to light some thorny issues around consent, data collection, and personally identifiable information.

What data?

Some of the questions Zuckerberg faced this week highlighted issues relating to the data that Facebook collects from users, and the consent and permissions that users give (or are unaware they give).

Facebook is often quite deliberate in its characterisations of “your data”, rejecting the notion that it “owns” user data.

That said, there is a lot of data on Facebook, and what exactly is “yours”, as opposed to simply “data related to you”, isn’t always clear. “Your data” notionally includes your posts, photos, videos, comments, content, and so on – anything that could be considered copyrightable work or intellectual property (IP).

What’s less clear is the state of your rights relating to data that is “about you”, rather than supplied by you. This is data that is created by your presence or your social proximity to Facebook.

Examples of data “about you” might include your browsing history and data gleaned from cookies, tracking pixels, and the like button widget, as well as social graph data supplied whenever Facebook users supply the platform with access to their phone or email contact lists.

Like most internet platforms, Facebook rejects any claim to ownership of the IP that users post. To avoid falling foul of copyright issues in the provision of its services, Facebook demands (as part of its user agreements and Statement of Rights and Responsibilities) a:

…non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.

Data scares

If you’re on Facebook then you’ve probably seen a post that keeps making the rounds every few years, saying:

In response to the new Facebook guidelines I hereby declare that my copyright is attached to all of my personal details…

Part of the reason we keep seeing data scares like this is that Facebook’s lacklustre messaging around user rights and data policies has contributed to confusion, uncertainty and doubt among its users.




Read more:
How to stop haemorrhaging data on Facebook


It was a point that Republican Senator John Kennedy put to Zuckerberg in blunt terms this week – a strong, but fair, assessment of the failings of Facebook’s policy messaging.

After the grilling

Zuckerberg and Facebook should learn from this congressional grilling that they have struggled and occasionally failed in their responsibilities to users.

It’s important that Facebook now makes efforts to communicate more strongly with users about their rights and responsibilities on the platform, as well as the responsibilities that Facebook owes them.

This should go beyond a mere awareness-style PR campaign. It should seek to truly inform and educate Facebook’s users, and people who are not on Facebook, about their data, their rights, and how they can meaningfully safeguard their personal data and privacy.




Read more:
Would regulation cement Facebook’s market power? It’s unlikely


Given the magnitude of Facebook as an internet platform, and its importance to users across the world, the spectre of regulation will continue to raise its head.

Ideally, the company should look to broaden its governance horizons, by seeking to truly engage in consultation and reform with Facebook’s stakeholders – its users – as well as the civil society groups and regulatory bodies that seek to empower users in these spaces.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.


Four ways social media companies and security agencies can tackle terrorism


Robyn Torok, Edith Cowan University

Prime Minister Malcolm Turnbull has joined Britain’s Prime Minister Theresa May in calling on social media companies to crack down on extremist material being published by users.

It comes in the wake of the recent terror attacks in Australia and Britain.

Facebook is considered a hotbed for terrorist recruitment, incitement, propaganda and the spreading of radical thinking. Twitter, YouTube and encrypted services such as WhatsApp and Telegram are also implicated.

Addressing the extent of such content on social media requires international cooperation from the large social media platforms themselves, as well as from encrypted services.

Some of that work is already underway by many social media operators, with Facebook’s rules on this leaked only last month. Twitter says that in one six-month period it suspended 376,890 accounts related to the promotion of terrorism.

While these measures are a good start, more can be done. A focus on disruption, encryption, recruitment and creating counter-narratives is recommended.

Disruption: remove content, break flow-on

Disruption of terrorists on social media involves reporting and taking down radical elements and acts of violence – whether radical accounts or posted content that breaches community safety and standards.

Both the timing and the thoroughness of removal are critical here.

Disruption is vital for removing extreme content and breaking the flow-on effect while someone is in the process of being recruited by extremists.

Taking down accounts and content is difficult as there is often a large volume of content to remove. Sometimes it is not removed as quickly as needed. In addition, extremists typically have multiple accounts and can operate under various aliases at the same time.

Encryption: security authorities need access

When Islamic extremists use encrypted channels, it makes the fight against terrorism much harder. Extremists readily shift from public forums to encrypted areas, and often work in both simultaneously.

Encrypted networks are fast becoming a problem because of the “burn time” (destruction of messages) and the fact that extremists can communicate mostly undetected.

Operations to attack and kill members of the public in the West have been propagated on these encrypted networks.

The extremists set up a unique way of communicating within encrypted channels to offer advice. That way a terrorist can directly communicate with the Islamic State group and receive directives to undertake an attack in a specific country, including operational methods and procedures.

This is extremely concerning, and authorities – including intelligence agencies and federal police – require access to encrypted networks to do their work more effectively. They need the ability to access servers to obtain vital information to help thwart possible attacks on home soil.

This access will need to be granted in consultation with the companies that offer these services. But such access could be challenging and there could also be a backlash from privacy groups.

Recruitment: find and follow key words

It was once thought that the process of recruitment occurred over extended periods of time. This is true in some instances, and it depends on a multitude of individual experiences, personality types, one’s perception of identity, and the types of strategies and techniques used in the recruitment process.

There is no one path toward violent extremism, but what makes the process of recruitment quicker is the neurolinguistic programming (NLP) method used by terrorists.

Extremists use NLP across multiple platforms and are quick to usher their recruits into encrypted chats.

Key terms are always used alongside NLP, such as “in the heart of green birds” (which is used in reference to martyrdom), “Istishhad” (operational heroism of loving death more than the West loves life), “martyrdom” and “Shaheed” (becoming a martyr).

If social media companies know and understand these key terms, they can help by removing any reference to them on their platforms. This is being done by some platforms to a degree, but in many cases social media operators still rely heavily on users reporting inappropriate material.
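A simple version of the keyword-based flagging described above can be sketched as follows. This is a naive illustrative filter only – real moderation systems are far more sophisticated, and the term list and function names here are assumptions for demonstration, drawn from the terms quoted in this article.

```python
# Hypothetical sketch: flag posts containing known recruitment key terms.
RECRUITMENT_TERMS = {
    "in the heart of green birds",  # reference to martyrdom
    "istishhad",
    "martyrdom",
    "shaheed",
}

def flag_post(text):
    """Return the set of recruitment terms found in `text`, case-insensitively."""
    lowered = text.lower()
    return {term for term in RECRUITMENT_TERMS if term in lowered}

posts = [
    "Lovely weather today",
    "He spoke of Shaheed and martyrdom in his last message",
]
for post in posts:
    hits = flag_post(post)
    if hits:
        print("FLAG:", sorted(hits))
```

Even this toy example shows why keyword matching alone is brittle: it misses paraphrases and misspellings, which is one reason platforms still depend heavily on user reports.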

Create counter-narratives: banning alone won’t work

Since there are so many social media applications, each with a high volume of material that is both very dynamic and fluid, any attempts to deal with extremism must accept the limitations and challenges involved.

Attempts to shut down sites, channels, and web pages are just one approach. It is imperative that efforts are not limited to such strategies.

Counter-narratives are essential, as these deconstruct radical ideologies and expose their flaws in reasoning.

But these counter-narratives need to be more sophisticated given the ability of extremists to manipulate arguments and appeal to emotions, especially by using horrific images.

This is particularly important for those on the social fringe, who may feel a sense of alienation.

It is important for these individuals to realise that such feelings can be addressed within the context of mainstream Islam without resorting to radical ideologies that leave them open to exploitation by experienced recruiters. Such recruiters are well practised and know how to identify individuals who are struggling, and how to usher them along radical pathways.

Ultimately, there are ways around all procedures that attempt to tackle the problem of terrorist extremism on social media. But steps are slowly being taken to reduce the risk and spread of radical ideologies.

This must include counter-narratives, as well as the timely eradication of extremist material based on keywords, and of any material from key radical preachers.

Robyn Torok, PhD, researcher and analyst, Edith Cowan University

This article was originally published on The Conversation. Read the original article.

How to unfollow, mute or ignore people on Facebook, Twitter, Snapchat and more


Gigaom

We’ve all been there. Whether it’s a picture of your friend’s new baby or your aunt’s incessant updates about the weather in small-town America, there are certain people in your social media feeds you’d like to tune out, even if just temporarily.

Social networks seem to be listening, and have been rolling out features to help users regain a little control of their feeds without ruffling the feathers of any friends. The problem is that each network has its own definition of tuning someone out – not to mention its own terminology.

To help you out, I combed some of the most popular social networks and muted/blocked/ignored/unfollowed everyone and everything I could. For a quick look, see our chart below. But we also have step-by-step pictures and an easy-to-follow guide for each network to make it easy to mute away.

How-to Guides:
Facebook
Google Plus
Twitter


Antisocial Social Networking


The link below is to an article that takes a look at ‘antisocial’ social networking and social networks.

For more visit:
http://www.theguardian.com/media/2014/jun/01/antisocial-networks-social-media-private-thoughts-apps-distance-online-friends-technology