Shadow profiles – Facebook knows about you, even if you’re not on Facebook


Andrew Quodling, Queensland University of Technology

Facebook’s founder and chief executive Mark Zuckerberg faced two days of grilling before US politicians this week, following concerns over how his company deals with people’s data.

But the data Facebook has on people who are not signed up to the social media giant also came under scrutiny.

During Zuckerberg’s congressional testimony he claimed to be ignorant of what are known as “shadow profiles”.

Zuckerberg: I’m not — I’m not familiar with that.

That’s alarming, given that we have been discussing this element of Facebook’s non-user data collection for the past five years, ever since the practice was brought to light by researchers at Packet Storm Security.

Maybe it was just the phrase “shadow profiles” with which Zuckerberg was unfamiliar. It wasn’t clear, but others were not impressed by his answer.


Facebook’s proactive data-collection processes have been under scrutiny in previous years, especially as researchers and journalists have delved into the workings of Facebook’s “Download Your Information” and “People You May Know” tools to report on shadow profiles.

Shadow profiles

To explain shadow profiles simply, let’s imagine a simple social group of three people – Ashley, Blair and Carmen – who already know one another, and have each other’s email addresses and phone numbers in their phones.

If Ashley joins Facebook and uploads her phone contacts to Facebook’s servers, then Facebook can proactively suggest friends whom she might know, based on the information she uploaded.

For now, let’s imagine that Ashley is the first of her friends to join Facebook. The information she uploaded is used to create shadow profiles for both Blair and Carmen — so that if Blair or Carmen joins, they will be recommended Ashley as a friend.

Next, Blair joins Facebook, uploading his phone’s contacts too. Thanks to the shadow profile, he has a ready-made connection to Ashley in Facebook’s “People You May Know” feature.

At the same time, Facebook has learned more about Carmen’s social circle — in spite of the fact that Carmen has never used Facebook, and therefore has never agreed to its policies for data collection.
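The mechanism described in the Ashley–Blair–Carmen example can be sketched as a toy model. Everything here is an illustrative assumption – the names, data structures and functions are invented for this sketch, and Facebook’s actual implementation is not public:

```python
# Toy model of the shadow-profile mechanism described above.
# All names and structures are illustrative assumptions;
# Facebook's actual implementation is not public.

shadow_profiles = {}   # person -> set of members whose uploads mention them
members = set()        # people who have actually joined the platform

def join_and_upload(user, contacts):
    """A new member joins and uploads their phone contact list."""
    members.add(user)
    for contact in contacts:
        shadow_profiles.setdefault(contact, set()).add(user)

def people_you_may_know(user):
    """Suggest existing members whose uploads mentioned this user."""
    return shadow_profiles.get(user, set()) & members

# Ashley joins first: shadow profiles are created for Blair and Carmen.
join_and_upload("Ashley", ["Blair", "Carmen"])

# Blair joins next and gets a ready-made suggestion from Ashley's upload.
join_and_upload("Blair", ["Ashley", "Carmen"])
print(people_you_may_know("Blair"))    # {'Ashley'}

# Carmen never joined, yet two uploads already describe her social circle.
print(shadow_profiles["Carmen"])
```

The key point the sketch makes concrete: Carmen appears as a key in the data store, and gains entries with every friend who joins, without ever taking an action herself.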

Despite the scary-sounding name, I don’t think there is necessarily any malice or ill will in Facebook’s creation and use of shadow profiles.

It seems like an earnestly designed feature in service of Facebook’s goal of connecting people. It’s a goal that clearly also aligns with Facebook’s financial incentives for growth and garnering advertising attention.

But the practice brings to light some thorny issues around consent, data collection, and personally identifiable information.

What data?

Some of the questions Zuckerberg faced this week highlighted issues relating to the data that Facebook collects from users, and the consent and permissions that users give (or are unaware they give).

Facebook is often quite deliberate in its characterisations of “your data”, rejecting the notion that it “owns” user data.

That said, there is a lot of data on Facebook, and what exactly is “yours”, as opposed to simply “data related to you”, isn’t always clear. “Your data” notionally includes your posts, photos, videos, comments, content, and so on. It’s anything that could be considered copyrightable work or intellectual property (IP).

What’s less clear is the state of your rights relating to data that is “about you”, rather than supplied by you. This is data that is created by your presence or your social proximity to Facebook.

Examples of data “about you” might include your browsing history and data gleaned from cookies, tracking pixels, and the like button widget, as well as social graph data supplied whenever Facebook users supply the platform with access to their phone or email contact lists.

Like most internet platforms, Facebook rejects any claim to ownership of the IP that users post. To avoid falling foul of copyright issues in the provision of its services, Facebook demands (as part of its user agreements and Statement of Rights and Responsibilities) a:

…non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.

Data scares

If you’re on Facebook then you’ve probably seen a post that keeps making the rounds every few years, saying:

In response to the new Facebook guidelines I hereby declare that my copyright is attached to all of my personal details…

Part of the reason we keep seeing data scares like this is that Facebook’s lacklustre messaging around user rights and data policies has contributed to confusion, uncertainty and doubt among its users.




Read more:
How to stop haemorrhaging data on Facebook


It was a point that Republican Senator John Kennedy raised with Zuckerberg this week.

Senator John Kennedy’s exclamation is a strong but fair assessment of the failings of Facebook’s policy messaging.

After the grilling

Zuckerberg and Facebook should learn from this congressional grilling that they have struggled and occasionally failed in their responsibilities to users.

It’s important that Facebook now makes efforts to communicate more strongly with users about their rights and responsibilities on the platform, as well as the responsibilities that Facebook owes them.

This should go beyond a mere awareness-style PR campaign. It should seek to truly inform and educate Facebook’s users, and people who are not on Facebook, about their data, their rights, and how they can meaningfully safeguard their personal data and privacy.




Read more:
Would regulation cement Facebook’s market power? It’s unlikely


Given the magnitude of Facebook as an internet platform, and its importance to users across the world, the spectre of regulation will continue to raise its head.

Ideally, the company should look to broaden its governance horizons, by seeking to truly engage in consultation and reform with Facebook’s stakeholders – its users — as well as the civil society groups and regulatory bodies that seek to empower users in these spaces.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.


How to stop haemorrhaging data on Facebook



Every time you open an app, click a link, like a post, read an article, hover over an ad, or connect to someone, you are generating data.
Shutterstock

Belinda Barnet, Swinburne University of Technology

If you are one of 2.2 billion Facebook users worldwide, you have probably been alarmed by the recent coverage of the Cambridge Analytica scandal, a story that began when The Guardian revealed 50 million (now thought to be 87 million) user profiles had been retrieved and shared without the consent of users.

Though the #deletefacebook campaign has gained momentum on Twitter, it is simply not practical for most of us to delete our accounts. It is technically difficult to do, and given that one quarter of the human population is on the platform, there is an undeniable social cost for being absent.




Read more:
Why we should all cut the Facebook cord. Or should we?


It is also not possible to use or even to have a Facebook profile without giving up at least some data: every time you open the app, click a link, like a post, hover over an ad, or connect to someone, you are generating data. This particular type of data is not something you can control, because Facebook considers such data its property.

Every service has a price, and the price for being on Facebook is your data.

However, you can remain on Facebook (and other social media platforms like it) without haemorrhaging data. If you want to stay in touch with those old school friends – despite the fact you will probably never see them again – here’s what you can do, step by step. The following instructions are tailored to Facebook settings on mobile.

Your location

The first place to start is with the device you are holding in your hand.
Facebook requests access to your GPS location by default, and unless you were reading the fine print when you installed the application (if you are that one person please tell me where you find the time), it will currently have access.

This means that whenever you open the app it knows where you are, and unless you have changed your location sharing setting from “Always” to “Never” or “Only while using”, it can track your location when you’re not using the app as well.

To keep your daily movements to yourself, go into Settings on Apple iPhone or Android, go to Location Services, and turn off or select “Never” for Facebook.

While you’re there, check for other social media apps with location access (like Twitter and Instagram) and consider changing them to “Never”.

Remember that pictures from your phone are GPS tagged too, so if you intend to share them on Facebook, revoke access to GPS for your camera as well.

Your content

The next thing to do is to control who can see what you post, who can see private information like your email address and phone number, and then apply these settings in retrospect to everything you’ve already posted.

Facebook has a “Privacy Shortcuts” tab under Settings, but we are going to start in Account Settings > Privacy.

You control who sees what you post, and who sees the people and pages you follow, by limiting the audience here.

Change “Who can see your future posts” and “Who can see the people and pages you follow” to “Only Friends”.

In the same menu, if you scroll down, you will see a setting called “Do you want search engines outside of Facebook to link to your profile?” Select No.

After you have made these changes, scroll down and limit the audience for past posts. Apply the new setting to all past posts, even though Facebook will try to alarm you. “The only way to undo this is to change the audience of each post one at a time! Oh my Goodness! You’ll need to change 1,700 posts over ten years.” Ignore your fears and click Limit.




Read more:
It’s time for third-party data brokers to emerge from the shadows


Next go in to Privacy Shortcuts – this is on the navigation bar below Settings. Then select Privacy Checkup. Limit who can see your personal information (date of birth, email address, phone number, place of birth if you provided it) to “Only Me”.

Third party apps

Every time you use Facebook to “login” to a service or application you are granting both Facebook and the third-party service access to your data.

As a result of the Cambridge Analytica scandal, Facebook has recently pledged to investigate and change this, but in the meantime it is best not to use Facebook to log in to third-party services. That includes Bingo Bash, unfortunately.

The third screen of Privacy Checkup shows you which apps have access to your data at present. Delete any that you don’t recognise or that are unnecessary.

In the final step we will be turning off “Facebook integration” altogether. This is optional. If you choose to do this, it will revoke permission for all previous apps, plugins, and websites that have access to your data. It will also prevent your friends from harvesting your data for their apps.

In this case you don’t need to delete individual apps as they will all disappear.

Turning off Facebook integration

If you want to be as secure as it is possible to be on Facebook, you can revoke third-party access to your content completely. This means turning off all apps, plugins and websites.

If you take this step Facebook won’t be able to receive information about your use of apps outside of Facebook and apps won’t be able to receive your Facebook data.

If you’re a business this is not a good idea as you will need it to advertise and to test apps. This is for personal pages.

It may make life a little more difficult for you in that your next purchase from Farfetch will require you to set up your own account rather than just harvest your profile. Your Klout score may drop because it can’t see Facebook and that might feel terrible.

Remember this setting only applies to the data you post and provide yourself. The signals you generate using Facebook (what you like, click on, read) will still belong to Facebook and will be used to tailor advertising.

To turn off Facebook integration, go into Settings, then Apps. Select Apps, websites and games.




Read more:
We need to talk about the data we give freely of ourselves online and why it’s useful


Facebook will warn you about all the Farmville updates you will miss and how you will have a hard time logging in to The Guardian without Facebook. Ignore this and select “Turn off”.

Well done. Your data is now as secure as it is possible to be on Facebook. Remember, though, that everything you do on the platform still generates data.

Belinda Barnet, Senior Lecturer in Media and Communications, Swinburne University of Technology

This article was originally published on The Conversation. Read the original article.

It’s time for third-party data brokers to emerge from the shadows




Sacha Molitorisz, University of Technology Sydney

Facebook announced last week it would discontinue the partner programs that allow advertisers to use third-party data from companies such as Acxiom, Experian and Quantium to target users.

Graham Mudd, Facebook’s product marketing director, said in a statement:

We want to let advertisers know that we will be shutting down Partner Categories. This product enables third party data providers to offer their targeting directly on Facebook. While this is common industry practice, we believe this step, winding down over the next six months, will help improve people’s privacy on Facebook.

Few people seemed to notice, and that’s hardly surprising. These data brokers operate largely in the background.

The invisible industry worth billions

In 2014, one researcher described the entire industry as “largely invisible”. That’s no mean feat, given how much money is being made. Personal data has been dubbed the “new oil”, and data brokers are very efficient miners. In the 2018 fiscal year, Acxiom expects annual revenue of approximately US$945 million.

The data broker business model involves accumulating information about internet users (and non-users) and then selling it. As such, data brokers have highly detailed profiles on billions of individuals, comprising age, race, sex, weight, height, marital status, education level, politics, shopping habits, health issues, holiday plans, and more.




Read more:
Facebook data harvesting: what you need to know


These profiles come not just from data you’ve shared, but from data shared by others, and from data that’s been inferred. In its 2014 report into the industry, the US Federal Trade Commission (FTC) showed how a single data broker had 3,000 “data segments” for nearly every US consumer.

Based on the interests inferred from this data, consumers are then placed in categories such as “dog owner” or “winter activity enthusiast”. However, some categories are potentially sensitive, including “expectant parent”, “diabetes interest” and “cholesterol focus”, or involve ethnicity, income and age. The FTC’s Jon Leibowitz described data brokers as the “unseen cyberazzi who collect information on all of us”.

In Australia, Facebook launched the Partner Categories program in 2015. Its aim was to “reach people based on what they do and buy offline”. This includes demographic and behavioural data, such as purchase history and home ownership status, which might come from public records, loyalty card programs or surveys. In other words, Partner Categories enables advertisers to use data brokers to reach specific audiences. This is particularly useful for companies that don’t have their own customer databases.

A growing concern

Third party access to personal data is causing increasing concern. This week, Grindr was shown to be revealing its users’ HIV status to third parties. Such news is unsettling, as if there are corporate eavesdroppers on even our most intimate online engagements.

The recent Cambridge Analytica furore stemmed from third parties. Indeed, apps created by third parties have proved particularly problematic for Facebook. From 2007 to 2014, Facebook encouraged external developers to create apps for users to add content, play games, share photos, and so on.




Read more:
Your online privacy depends as much on your friends’ data habits as your own


Facebook then gave the app developers wide-ranging access to user data, and to users’ friends’ data. The data shared might include details of schooling, favourite books and movies, or political and religious affiliations.

As one group of privacy researchers noted in 2011, this process, “which nearly invisibly shares not just a user’s, but a user’s friends’ information with third parties, clearly violates standard norms of information flow”.

With the Partner Categories program, the buying, selling and aggregation of user data may be largely hidden, but is it unethical? The fact that Facebook has moved to stop the arrangement suggests that it might be.

More transparency and more respect for users

To date, there has been insufficient transparency, insufficient fairness and insufficient respect for user consent. This applies to Facebook, but also to app developers, and to Acxiom, Experian, Quantium and other data brokers.

Users might have clicked “agree” to terms and conditions that contained a clause ostensibly authorising such sharing of data. However, it’s hard to construe this type of consent as morally justifying.




Read more:
You may be sick of worrying about online privacy, but ‘surveillance apathy’ is also a problem


In Australia, new laws are needed. Data flows in complex and unpredictable ways online, and legislation ought to provide, under threat of significant penalties, that companies (and others) must abide by reasonable principles of fairness and transparency when they deal with personal information. Further, such legislation can help specify what sort of consent is required, and in which contexts. Currently, the Privacy Act doesn’t go far enough, and is too rarely invoked.

In its 2014 report, the US Federal Trade Commission called for laws that enabled consumers to learn about the existence and activities of data brokers. That should be a starting point for Australia too: consumers ought to have reasonable access to information held by these entities.

Time to regulate

Having resisted regulation since 2004, Mark Zuckerberg has finally conceded that Facebook should be regulated – and advocated for laws mandating transparency for online advertising.

Historically, Facebook has made a point of dedicating itself to openness, but Facebook itself has often operated with a distinct lack of openness and transparency. Data brokers have been even worse.

Facebook’s motto used to be “Move fast and break things”. Now Facebook, data brokers and other third parties need to work with lawmakers to move fast and fix things.

Sacha Molitorisz, Postdoctoral Research Fellow, Centre for Media Transition, Faculty of Law, University of Technology Sydney

This article was originally published on The Conversation. Read the original article.

Why the business model of social media giants like Facebook is incompatible with human rights




Sarah Joseph, Monash University

Facebook has had a bad few weeks. The social media giant had to apologise for failing to protect the personal data of millions of users from being accessed by data mining company Cambridge Analytica. Outrage is brewing over its admission of spying on people via their Android phones. Its stock price plummeted, while millions deleted their accounts in disgust.

Facebook has also faced scrutiny over its failure to prevent the spread of “fake news” on its platforms, including via an apparent orchestrated Russian propaganda effort to influence the 2016 US presidential election.

Facebook’s actions – or inactions – facilitated breaches of privacy and human rights associated with democratic governance. But it might be that its business model – and those of its social media peers generally – is simply incompatible with human rights.

The good

In some ways, social media has been a boon for human rights – most obviously for freedom of speech.

Previously, the so-called “marketplace of ideas” was technically available to all (in “free” countries), but was in reality dominated by the elites. While all could equally exercise the right to free speech, we lacked equal voice. Gatekeepers, especially in the form of the mainstream media, largely controlled the conversation.

But today, anybody with internet access can broadcast information and opinions to the whole world. While not all will be listened to, social media is expanding the boundaries of what is said and received in public. The marketplace of ideas is, in effect, bigger, broader and more diverse.

Social media enhances the effectiveness of non-mainstream political movements, public assemblies and demonstrations, especially in countries that exercise tight controls over civil and political rights, or have very poor news sources.

Social media played a major role in co-ordinating the massive protests that brought down dictatorships in Tunisia and Egypt, as well as large revolts in Spain, Greece, Israel, South Korea, and the Occupy movement. More recently, it has facilitated the rapid growth of the #MeToo and #neveragain movements, among others.




Read more:
#MeToo is not enough: it has yet to shift the power imbalances that would bring about gender equality


The bad and the ugly

But the social media “free speech” machines can create human rights difficulties. Those newly empowered voices are not necessarily desirable voices.

The UN recently found that Facebook had been a major platform for spreading hatred against the Rohingya in Myanmar, which in turn led to ethnic cleansing and crimes against humanity.

Video sharing site YouTube seems to automatically guide viewers to the fringiest versions of what they might be searching for. A search on vegetarianism might lead to veganism; jogging to ultra-marathons; Donald Trump’s popularity to white supremacist rants; and Hillary Clinton to 9/11 trutherism.

YouTube, via its algorithm’s natural and probably unintended impacts, “may be one of the most powerful radicalising instruments of the 21st century”, with all the attendant human rights abuses that might follow.

The business model and human rights

Human rights abuses might be embedded in the business model that has evolved for social media companies in their second decade.

Essentially, those models are based on the collection and use for marketing purposes of their users’ data. And the data they have is extraordinary in its profiling capacities, and in the consequent unprecedented knowledge base and potential power it grants to these private actors.

Indirect political influence is commonly exercised, even in the most credible democracies, by private bodies such as major corporations. This power can be partially constrained by “anti-trust laws” that promote competition and prevent undue market dominance.

Anti-trust measures could, for example, be used to hive off Instagram from Facebook, or YouTube from Google. But these companies’ power essentially arises from the sheer number of their users: in late 2017, Facebook was reported as having more than 2.2 billion active users. Anti-trust measures do not seek to cap the number of a company’s customers, as opposed to its acquisitions.


Power through knowledge

In 2010, Facebook conducted an experiment by randomly deploying a non-partisan “I voted” button into 61 million feeds during the US mid-term elections. That simple action led to 340,000 more votes, or about 0.14% of the US voting population. This number can swing an election. A bigger sample would lead to even more votes.
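The figures quoted above can be sanity-checked with some back-of-envelope arithmetic. The lift per feed and the implied size of the US voting population below are rough derivations from the article’s rounded numbers, not figures from the study itself:

```python
# Back-of-envelope check of the figures quoted above for the 2010
# "I voted" button experiment (rounded, illustrative only).

feeds_shown = 61_000_000        # feeds that displayed the button
extra_votes = 340_000           # additional votes attributed to it

# Turnout lift among those shown the button: roughly half a percent.
lift = extra_votes / feeds_shown
print(f"lift per feed: {lift:.2%}")

# 340,000 votes being "about 0.14%" of the US voting population
# implies a voting population of roughly 240 million people.
implied_voting_population = extra_votes / 0.0014
print(f"implied voting population: {implied_voting_population / 1e6:.0f} million")
```

Even a half-percent lift, applied selectively, is comfortably larger than the margin in many close elections – which is what makes the knowledge itself politically significant.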

So Facebook knows how to deploy the button to sway an election, which would clearly be lamentable. However, the mere possession of that knowledge makes Facebook a political player. It now knows that button’s political impact, the types of people it is likely to motivate, which party is favoured by its deployment or non-deployment, and at what times of day.

It might seem inherently incompatible with democracy for that knowledge to be vested in a private body. Yet the retention of such data is the essence of Facebook’s ability to make money and run a viable business.




Read more:
Can Facebook influence an election result?


Microtargeting

A study has shown that a computer knows more about a person’s personality than their friends or flatmates from an analysis of 70 “likes”, and more than their family from 150 likes. From 300 likes it can outperform one’s spouse.

This enables the micro-targeting of people for marketing messages – whether those messages market a product, a political party or a cause. This is Facebook’s product, from which it generates billions of dollars. It enables extremely effective advertising and the manipulation of its users. This is so even without Cambridge Analytica’s underhanded methods.

Advertising is manipulative: that is its point. Yet it is a long bow to label all advertising as a breach of human rights.

Advertising is available to all with the means to pay. Social media micro-targeting has become another battleground where money is used to attract customers and, in the political arena, influence and mobilise voters.

While the influence of money in politics is pervasive – and probably inherently undemocratic – it seems unlikely that spending money to deploy social media to boost an electoral message is any more a breach of human rights than other overt political uses of money.

Yet the extraordinary scale and precision of its manipulative reach might justify differential treatment of social media compared to other advertising, as its manipulative political effects arguably undermine democratic choices.

As with mass data collection, perhaps it may eventually be concluded that that reach is simply incompatible with democratic and human rights.

‘Fake news’

Finally, there is the issue of the spread of misinformation.

While paid advertising may not breach human rights, “fake news” distorts and poisons democratic debate. It is one thing for millions of voters to be influenced by precisely targeted social media messages, but another for maliciously false messages to influence and manipulate millions – whether paid for or not.

In a Declaration on Fake News, several UN and regional human rights experts said fake news interfered with the right to know and receive information – part of the general right to freedom of expression.

Its mass dissemination may also distort rights to participate in public affairs. Russia and Cambridge Analytica (assuming allegations in both cases to be true) have demonstrated how social media can be “weaponised” in unanticipated ways.

Yet it is difficult to know how social media companies should deal with fake news. The suppression of fake news is the suppression of speech – a human right in itself.

The preferred solution outlined in the Declaration on Fake News is to develop technology and digital literacy to enable readers to more easily identify fake news. The human rights community seems to be trusting that the proliferation of fake news in the marketplace of ideas can be corrected with better ideas rather than censorship.

However, one cannot be complacent in assuming that “better speech” triumphs over fake news. A recent study concluded fake news on social media:

… diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information.

Also, internet “bots” apparently spread true and false news at the same rate, which indicates that:

… false news spreads more than the truth because humans, not robots, are more likely to spread it.

The depressing truth may be that human nature is attracted to fake stories over the more mundane true ones, often because they satisfy predetermined biases, prejudices and desires. And social media now facilitates their wildfire spread to an unprecedented degree.

Perhaps social media’s purpose – the posting and sharing of speech – cannot help but generate a distorted and tainted marketplace of fake ideas that undermine political debate and choices, and perhaps human rights.

Fake news disseminated by social media is argued to have played a role in electing Donald Trump to the presidency.
EPA/Jim Lo Scalzo

What next?

It is premature to assert that the very collection of massive amounts of data is irreconcilable with the right to privacy (and even with rights relating to democratic governance).

Similarly, it is premature to decide that micro-targeting manipulates the political sphere beyond the bounds of democratic human rights.

Finally, it may be that better speech and corrective technology will help to undo fake news’ negative impacts: it is premature to assume that such solutions won’t work.

However, by the time such conclusions can be reached, it may be too late to do much about them. This may be a case where government regulation and international human rights law – and even business acumen and expertise – lag too far behind technological developments to appreciate their human rights dangers.

At the very least, we must now seriously question the business models that have emerged from the dominant social media platforms. Maybe the internet should be rewired from the grassroots, rather than be led by digital oligarchs’ business needs.

Sarah Joseph, Director, Castan Centre for Human Rights Law, Monash University

This article was originally published on The Conversation. Read the original article.

Your online privacy depends as much on your friends’ data habits as your own



Many social media users have been shocked to learn the extent of their digital footprint.
Shutterstock

Vincent Mitchell, University of Sydney; Andrew Stephen, University of Oxford, and Bernadette Kamleitner, Vienna University of Economics and Business

In the aftermath of revelations about the alleged misuse of Facebook user data by Cambridge Analytica, many social media users are educating themselves about their own digital footprint. And some are shocked at the extent of it.

Last week, one user took advantage of a Facebook feature that enables you to download all the information the company stores about you. He found his call and SMS history in the data dump – something Facebook says is an opt-in feature for those using Messenger and Facebook Lite on Android.


This highlights an issue that we don’t talk about enough when it comes to data privacy: that the security of our data is dependent not only on our own vigilance, but also that of those we interact with.

It’s easy for friends to share our data

In the past, personal data was either captured in our memories or in physical objects, such as diaries or photo albums. If a friend wanted data about us, they would have to either observe us or ask us for it. That requires effort, or our consent, and focuses on information that is both specific and meaningful.

Nowadays, data others hold about us is given away easily. That’s partly because the data apps ask for is largely intangible and invisible, as well as vague rather than specific.

Read more: We need to talk about the data we give freely of ourselves online and why it’s useful

What’s more, it doesn’t seem to take much to get us to give away other people’s data in return for very little, with one study finding 98% of MIT students would give away their friends’ email addresses when promised free pizza.

Other studies have shown that collaborating in shared folders on cloud services, such as Google Drive, can result in privacy losses that are 39% higher, due to collaborators installing third-party apps you wouldn’t choose to install yourself. Facebook’s data download tool poses another risk: once the data is taken out of Facebook, it becomes even easier to copy and distribute.

This shift from personal to interdependent online privacy reliant on our friends, family and colleagues is a seismic one for the privacy agenda.

How much data are we talking about?

With more than 3.5 million apps on Google Play alone, the collection of data from our friends via back-door methods is more common than we might think. The back door opens when you press “accept” on the permissions an app requests at installation, giving it access to your contacts.

WhatsApp might have your contact information even if you aren’t a registered user.

Then the data harvesting machinery begins its work – often in perpetuity, and without us knowing or understanding what will be done with it. More importantly, our friends never agreed to us giving away their data. And we have a lot of friends’ data to harvest.

Read more: Explainer: what is differential privacy and how can it protect your data?

The average Australian has 234 Facebook friends. Large-scale data collection is easy in an interconnected world when each person who signs up for an app has 234 friends, and each of them has 234 and, so on. That’s how Cambridge Analytica was apparently able to collect information on up to 50 million users, with permission from just 270,000.
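The network arithmetic behind that claim is easy to sketch. The figures below come straight from the article (270,000 installers, 234 friends each); the code is only a back-of-the-envelope illustration, not Cambridge Analytica’s actual method:

```python
# Back-of-the-envelope estimate of how far friend-list harvesting reaches.
# Figures from the article: ~270,000 app installers, each with an
# average of 234 Facebook friends.
installers = 270_000
avg_friends = 234

# Upper bound on profiles reached, ignoring overlap between friend lists.
upper_bound = installers * avg_friends
print(f"Upper bound: {upper_bound:,} profiles")  # Upper bound: 63,180,000 profiles
```

Real friend lists overlap heavily, which is why the reported figure was “up to 50 million” unique users rather than the 63 million upper bound.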

Add to that the fact that the average person uses nine different apps daily. Once installed, some of these apps can harvest data continuously without your friends knowing, and 70% of apps share it with third parties.

Read more: 7 in 10 smartphone apps share your data with third-party services

We’re more likely to refuse data requests that are specific

Around 60% of us never, or only occasionally, review the privacy policy and permissions requested by an app before downloading. And in our own research conducted with a sample of 287 London business students, 96% of participants failed to realise the scope of all the information they were giving away.

However, this can be changed by making a data request more specific – for example, by separating out “contacts” from “photos”. When we asked participants if they had the right to give away all the data on their phone, 95% said yes. But when they focused on just their contacts, this figure decreased to 80%.

We can take this further with a thought experiment. Imagine if an app asked you for your “contacts, including your grandmother’s phone number and your daughter’s photos”. Would you be more likely to say no? The reality of what you are actually giving away in these consent agreements becomes more apparent with a specific request.

The silver lining is more vigilance

This new reality not only threatens moral codes and friendships, but can cause harm from hidden viruses, malware, spyware or adware. We may also be subject to prosecution, as in a recent German case in which a judge ruled that giving away a friend’s data on WhatsApp without their permission was wrong.

Read more: Australia should strengthen its privacy laws and remove exemptions for politicians

Although company policies on privacy can help, these are difficult to police. Facebook’s “platform policy” at the time the Cambridge Analytica data was harvested only allowed the collection of friends’ data to improve the user experience of an app, while preventing it from being sold on or used for advertising. But this puts a huge burden on companies to police, investigate and enforce these policies. It’s a task few can afford, and even a company the size of Facebook failed.

The silver lining to the Cambridge Analytica case is that more and more people are recognising that the idea of “free” digital services is an illusion. The price we pay is not only our own privacy, but the privacy of our friends, family and colleagues.

Vincent Mitchell, Professor of Marketing, University of Sydney; Andrew Stephen, L’Oréal Professor of Marketing & Associate Dean of Research, University of Oxford, and Bernadette Kamleitner, Vienna University of Economics and Business

This article was originally published on The Conversation. Read the original article.

Google, Facebook fall into line on tax, but eBay remains defiant


Michael West, University of Sydney

Under pressure from the Australian Tax Office, Google and Facebook have begun to bring their revenue onshore to be taxed. eBay remains recalcitrant, still deeming its Australian business to be a Swiss business and thereby avoiding millions in income tax and GST.

It is multinational reporting season once again, and the early signs are that the government’s multinational tax avoidance laws are starting to work. But the world’s largest corporations are still paying a fraction of their fair share of tax in this country.

Until this year, Google and Facebook maintained corporate structures that booked the billions of dollars of revenue they made in Australia directly offshore. However, eBay is still blithely pretending it doesn’t have an Australian business and that the billion dollars a year it makes from operating its online auction house in this country – through which Australians buy and sell things with other Australians in Australia – is really the business of an entity residing at 15 Helvetiastrasse, Bern, Switzerland.

According to its accounts, the latest for the year to December 2016, eBay Australia is still masquerading as being in the business of “the recommendation of market penetration strategies” on behalf of eBay International AG.

So it is that every cent of the $59 million that eBay disclosed in its cash-flow statement for 2016 came from related parties, mostly for “rendering of services”. On this, eBay paid $1.9 million in tax after ratcheting up its costs by $13 million to wipe out most of the $20 million uplift in cash flow. The average salary at eBay, if the accounts can be believed, is $312,553 – 109 employees, according to the directors’ report, getting $34.1 million.
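Those pay numbers can be sanity-checked with a line of arithmetic. The figures below are the rounded ones quoted above from eBay Australia’s directors’ report; this is only a consistency check, not a reconstruction of the actual accounts:

```python
# Consistency check on eBay Australia's reported pay figures:
# $34.1 million (rounded) across 109 employees.
total_pay = 34_100_000
employees = 109

average_salary = total_pay / employees
print(f"Average salary: ${average_salary:,.0f}")  # Average salary: $312,844
```

That is within $300 of the article’s $312,553 average; the small gap comes from the $34.1 million total being rounded to the nearest $100,000.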

Mind you, according to the directors’ report, these 109 people are engaged in carrying out the principal activities of the company, which are “the recommendation of market penetration strategies, advertising and promotion activities”.

Gobbledygook, but the numbers are irrelevant anyway. The estimated billion dollars or more that eBay is said to make in Australia is not even included in its financial statements, just the revenue from its secretive associates. Moreover, the accounts are not consolidated, according to the notes, rendering the entire disclosure a farce. The auditor is PwC.

Funnily, though, the cover page Form 388, authenticated by EY, talks about “consolidated revenue” and “consolidated gross assets” – despite the fact that PwC says the accounts are not consolidated.

So eBay is the quintessence of the undisclosed agency, a puppet regime designed to whisk Australian profits offshore to a tax haven. The shadow directors are in Bern and the ultimate parent eBay Inc is in the US.

Over the past 15 years, eBay has dodged GST and paid income tax of just $8 million (almost one-fifth of its bill for “professional fees” at $38 million), despite its billions of dollars in cash-flow.

Positive signs of change

Focusing on more positive developments on the multinational tax scene, arch-tax avoider Google Australia and New Zealand is now recognising that a portion of the profits it makes in Australia are in fact Australian rather than Singaporean.

Industry observers believe Google makes about $3 billion in sales from its advertising business here. Until this year, its only revenue had come from three related parties via service arrangements. Now, with the introduction of the multinational anti-avoidance legislation, Google has recognised roughly one-third of its Australian revenue as Australian.

In the broader context it is worth considering the effect of the digital revolution on Australia’s tax base.

Where the TV networks, News Ltd (though belligerent on the tax front) and Fairfax Media once paid hundreds of millions of dollars a year in tax collectively, they are now struggling to make a profit. In their place, it is estimated Facebook and Google now pick up 80% of the advertising dollar in this country but they pay negligible tax.

Globalisation and the internet are similarly challenging Australia’s revenue base in retail, financial services and other sectors. PayPal, for instance, eBay’s corporate cousin, paid more than $1 billion of its $1.2 billion in revenues to its parent and associates in Singapore over the nine years to 2014, thanks to a “service agreement”.

Looking at Google’s accounts, thanks to the new tax law, revenue rose from $498 million to $1.14 billion. Sales and marketing expenses, however, recognised for the first time at $324 million, knocked profits about. Profit rose from $50 million to $121 million, on which tax expense was $16 million, up from $3 million.

Actual tax paid as per the cash-flow statement was $41 million, up from $16 million. So, like Apple, Google is beginning to pay significant amounts of tax, although still way short of the mark, and it appears to have bloated its cost base here as much as humanly possible. Assuming group sales are heading towards $3 billion (Google booked $882 million in advertising revenue), the real income tax number ought to have nine digits.

For its part, Facebook booked revenue of $327 million, ten times the $33.5 million recorded in the previous year. After forking out $271 million to related parties for the “purchase of advertising inventory”, it made a profit of just $6.3 million, on which it paid $3.4 million in tax.

Under its previous structure, Facebook sales were booked to an associate in Ireland. For the purposes of reporting as little as possible, the company even won an exemption from the corporate regulator when it claimed to be a “Small Pty Company Controlled By a Foreign Coy Which is Not Part of Large Group”. That its foreign parent was valued at more than $170 billion on Wall Street didn’t seem to matter.

Now, Facebook has declared itself to be a reseller of local advertising inventory. Both Google and Facebook are audited by EY.

None of these companies operate to maximise profits for the benefit of their Australian entities. All have small, token boards of directors. All operate in the interests of their foreign overlords and should be taxed as agencies.

It is a good thing the authorities are catching up with multinational tax lurks. This would not have occurred without public outrage and dissent. Nor would it have occurred without the Senate Inquiry into Corporate Tax Avoidance in 2015, which thrust the issues into public view. They should keep this Senate committee rolling with biannual investigations where corporate leaders are held to account and subject to full public scrutiny. After all, directors have a fiduciary duty to perform in the interests of their companies, not some tax officer in California.

Further, the architects of multinational tax avoidance – EY, Deloitte, PwC and KPMG – ought to be subject to greater disclosure requirements rather than operating as murky partnerships whose partners pontificate to government on tax policy while advising their big clients how best not to pay tax, or “leakage” as they call it in the trade.

Michael West, Adjunct Associate Professor, School of Social and Political Sciences, University of Sydney

This article was originally published on The Conversation. Read the original article.

Antisocial Social Networking


The link below is to an article that takes a look at ‘antisocial’ social networking and social networks.

For more visit:
http://www.theguardian.com/media/2014/jun/01/antisocial-networks-social-media-private-thoughts-apps-distance-online-friends-technology