Apple’s new ‘app tracking transparency’ has angered Facebook. How does it work, what’s all the fuss about, and should you use it?



Paul Haskell-Dowland, Edith Cowan University and Nikolai Hampton, Edith Cowan University

Apple users across the globe are adopting the latest operating system update, called iOS 14.5, featuring the now-obligatory new batch of emojis.

But there’s another change that’s arguably less fun but much more significant for many users: the introduction of “app tracking transparency”.

This feature promises to usher in a new era of user-oriented privacy, and not everyone is happy — most notably Facebook, which relies on tracking web users’ browsing habits to sell targeted advertising. Some commentators have described it as the beginnings of a new privacy feud between the two tech behemoths.

So, what is app tracking transparency?

App tracking transparency is a continuation of Apple’s push to be recognised as the platform of privacy. The new feature requires apps to display a pop-up notification that explains what data the app wants to collect, and what it proposes to do with it.


There is nothing users need to do to gain access to the new feature, other than install the latest iOS update, which happens automatically on most devices. Once upgraded, apps that use tracking functions will display a request to opt in or out of this functionality.

A new App Tracking Transparency feature across iOS, iPadOS, and tvOS will require apps to get the user’s permission before tracking their data across apps or websites owned by other companies.
Apple newsroom

How does it work?

As Apple has explained, the app tracking transparency feature is a new “application programming interface”, or API — a suite of programming commands used by developers to interact with the operating system.

The API gives software developers a few pre-canned functions that allow them to do things like “request tracking authorisation” or use the tracking manager to “check the authorisation status” of individual apps.

In more straightforward terms, this gives app developers a uniform way of requesting these tracking permissions from the device user. It also means the operating system has a centralised location for storing and checking what permissions have been granted to which apps.
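To illustrate what this looks like from a developer’s perspective, here is a minimal sketch of a call to Apple’s AppTrackingTransparency framework; the function name and the handling of each status are our own illustrative choices, not part of the API:

```swift
import AppTrackingTransparency
import AdSupport

// Ask the user for tracking permission. iOS shows the system
// pop-up only the first time; later calls simply return the
// answer the operating system has stored for this app.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // Tracking allowed: the device advertising
            // identifier (IDFA) is available to the app.
            let idfa = ASIdentifierManager.shared().advertisingIdentifier
            print("Tracking authorised, IDFA: \(idfa)")
        case .denied, .restricted:
            // The identifier is returned as all zeros.
            print("Tracking not permitted")
        case .notDetermined:
            print("User has not yet been asked")
        @unknown default:
            break
        }
    }
}
```

The centralised record of permissions means any part of the app (or Apple’s own tooling) can later check `ATTrackingManager.trackingAuthorizationStatus` rather than each app inventing its own consent bookkeeping.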

What is missing from the fine print is that there is no technical mechanism to prevent an app from tracking a user. The app tracking transparency framework is merely a pop-up box.

It is also interesting to note the specific wording of the pop-up: “ask app not to track”. If the application is using legitimate “device advertising identifiers”, answering no will result in this identifier being set to zero. This will reduce the tracking capabilities of apps that honour Apple’s tracking policies.

However, if an app is really determined to track you, there are many techniques that could allow it to create surreptitious user-specific identifiers, which may be difficult for Apple to detect or prevent.

For example, while an app might not use Apple’s “device advertising identifier”, it would be easy for the app to generate a little bit of “random data”. This data could then be passed between sites under the guise of normal operations such as retrieving an image with the data embedded in the filename. While this would contravene Apple’s developer rules, detecting this type of secret data could be very difficult.
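To make this concrete, here is a hypothetical sketch of the kind of covert identifier described above. All names here (the storage key, the URL, the functions) are invented for illustration; this is what such a rule-breaking app *could* do, not a real implementation:

```swift
import Foundation

// Hypothetical covert tracking that ignores Apple's advertising
// identifier entirely: generate a random identifier once and
// persist it between launches under an innocuous-sounding key.
func covertIdentifier() -> String {
    let key = "cachedResourceTag"
    if let existing = UserDefaults.standard.string(forKey: key) {
        return existing
    }
    let fresh = UUID().uuidString
    UserDefaults.standard.set(fresh, forKey: key)
    return fresh
}

// Embed the identifier in what looks like a routine image fetch.
// The server logs the requested filename and can now recognise
// this device on every subsequent request.
func bannerURL() -> URL {
    let id = covertIdentifier()
    return URL(string: "https://ads.example.com/banner-\(id).png")!
}
```

Because the request is indistinguishable from a normal image download, spotting the smuggled identifier requires inspecting the app’s traffic patterns rather than its use of any Apple API, which is why detection is hard.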




Read more:
Your smartphone apps are tracking your every move – 4 essential reads


Apple seems prepared to crack down hard on developers who don’t play by the rules. The most recent additions to Apple’s App Store guidelines explicitly tell developers:

You must receive explicit permission from users via the App Tracking Transparency APIs to track their activity.

It’s unlikely major app developers will want to fall foul of this policy — a ban from the App Store would be costly. But it’s hard to imagine Apple sanctioning a really big player like Facebook or TikTok without some serious behind-the-scenes negotiation.

Why is Facebook objecting?

Facebook is fuelled by web users’ data. Inevitably, anything that gets in the way of its gargantuan revenue-generating network is seen as a threat. In 2020, Facebook’s revenue from advertising exceeded US$84 billion – a 21% rise on 2019.

The issues are deep-rooted and reflect the two tech giants’ very different business models. Apple’s business model is the sale of laptops, computers, phones and watches – with a significant proportion of its income derived from the vast ecosystem of apps and in-app purchases used on these devices. Apple’s app revenue was reported at US$64 billion in 2020.

With a vested interest in ensuring its customers are loyal and happy with its devices, Apple is well positioned to deliver privacy without harming profits.

Should I use it?

Ultimately, it is a choice for the consumer. Many apps and services are offered ostensibly for free to users. App developers often cover their costs through subscription models, in-app purchases or in-app advertising. If enough users decide to embrace privacy controls, developers will either change their funding model (perhaps moving to paid apps) or attempt to find other ways to track users to maintain advertising-derived revenue.

If you don’t want your data to be collected (and potentially sold to unnamed third parties), this feature offers one way to restrict the amount of your data that is trafficked in this way.

But it’s also important to note that tracking of users and devices is a valuable tool for advertising optimisation by building a comprehensive picture of each individual. This increases the relevance of each advert while also reducing advertising costs (by only targeting users who are likely to be interested). Users also arguably benefit, as they see more (relevant) adverts that are contextualised for their interests.

It may slow down the rate at which we receive personalised ads in apps and websites, but this change won’t be an end to intrusive digital advertising. In essence, this is the price we pay for “free” access to these services.




Read more:
Facebook data breach: what happened and why it’s hard to know if your data was leaked




Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University and Nikolai Hampton, School of Science, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

There’s no such thing as ‘alternative facts’. 5 ways to spot misinformation and stop sharing it online




Mark Pearson, Griffith University

The blame for the recent assault on the US Capitol and President Donald Trump’s broader dismantling of democratic institutions and norms can be laid at least partly on misinformation and conspiracy theories.

Those who spread misinformation, like Trump himself, are exploiting people’s lack of media literacy — it’s easy to spread lies to people who are prone to believe what they read online without questioning it.

We are living in a dangerous age in which the internet makes it possible to spread misinformation far and wide, and in which most people lack the basic fact-checking abilities to discern fact from fiction — or, worse, the desire to develop a healthy skepticism at all.




Read more:
Stopping the spread of COVID-19 misinformation is the best 2021 New Year’s resolution


Journalists are trained in this sort of thing — that is, the responsible ones who are trying to counter misinformation with truth.

Here are five fundamental lessons from Journalism 101 that all citizens can learn to improve their media literacy and fact-checking skills:

1. Distinguishing verified facts from myths, rumours and opinions

Cold, hard facts are the building blocks for considered and reasonable opinions in politics, media and law.

And there are no such things as “alternative facts” — facts are facts. Just because a falsity has been repeated many times by important people and their affiliates does not make it true.

We cannot expect the average citizen to have the skills of an academic researcher, journalist or judge in determining the veracity of an asserted statement. However, we can teach people some basic strategies before they mistake mere assertions for actual facts.

Does a basic internet search show these assertions have been confirmed by usually reliable sources – such as non-partisan mainstream news organisations, government websites and expert academics?

Students are taught to look to the URL of more authoritative sites — such as .gov or .edu — as a good hint at the factual basis of an assertion.

Searches and hashtags in social media are much less reliable as verification tools because you could be fishing within the “bubble” (or “echo chamber”) of those who share common interests, fears and prejudices – and are more likely to be perpetuating myths and rumours.

2. Mixing up your media and social media diet

We need to break out of our own “echo chambers” and our tendencies to access only the news and views of those who agree with us, on the topics that interest us and where we feel most comfortable.

For example, over much of the past five years, I have deliberately switched between various conservative and liberal media outlets when something important has happened in the US.

By looking at the coverage of the left- and right-wing media, I can hope to find a common set of facts both sides agree on — beyond the partisan rhetoric and spin. And if only one side is reporting something, I know to question this assertion and not just take it at face value.

3. Being skeptical and assessing the factual premise of an opinion

Journalism students learn to approach the claims of their sources with a “healthy skepticism”. For instance, if you are interviewing someone and they make what seems to be a bold or questionable claim, it’s good practice to pause and ask what facts the claim is based on.

Students are taught in media law this is the key to the fair comment defence to a defamation action. This permits us to publish defamatory opinions on matters of public interest as long as they are reasonably based on provable facts put forth by the publication.

The ABC’s Media Watch used this defence successfully (at trial and on appeal) when it criticised a Sydney Sun-Herald journalist’s reporting that claimed toxic materials had been found near a children’s playground.

This assessment of the factual basis of an opinion is not reserved for defamation lawyers – it is an exercise we can all undertake as we decide whether someone’s opinion deserves our serious attention and republication.




Read more:
Teaching children digital literacy skills helps them navigate and respond to misinformation


4. Exploring the background and motives of media and sources

A key skill in media literacy is the ability to look behind the veil of those who want our attention — media outlets, social media influencers and bloggers — to investigate their allegiances, sponsorships and business models.

For instance, these are some key questions to ask:

  • who is behind that think tank whose views you are retweeting?

  • who owns the online newspaper you read and what other commercial interests do they hold?

  • is your media diet dominated by news produced from the same corporate entity?

  • why does someone need to be so loud or insulting in their commentary; is this indicative of their neglect of important facts that might counter their view?

  • what might an individual or company have to gain or lose by taking a position on an issue, and how might that influence their opinion?

Just because someone has an agenda does not mean their facts are wrong — but it is a good reason to be even more skeptical in your verification processes.




Read more:
Why is it so hard to stop COVID-19 misinformation spreading on social media?


5. Reflecting and verifying before sharing

We live in an era of instant republication. We immediately retweet and share content we see on social media, often without even having read it thoroughly, let alone having fact-checked it.

Mindful reflection before pressing that sharing button would allow you to ask yourself, “Why am I even choosing to share this material?”

You could also help shore up democracy by engaging in the fact-checking processes mentioned above to avoid being part of the problem by spreading misinformation.

Mark Pearson, Professor of Journalism and Social Media, Griffith Centre for Social and Cultural Research, Griffith University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why social media platforms banning Trump won’t stop — or even slow down — his cause


Bronwyn Carlson, Macquarie University

Last week Twitter permanently suspended US President Donald Trump in the wake of his supporters’ violent storming of Capitol Hill. Trump was also suspended from Facebook and Instagram indefinitely.

Heads quickly turned to the right-wing Twitter alternative Parler — which seemed to be a logical place of respite for the digitally de-throned president.

But Parler too was axed, as Amazon pulled its hosting services and Google and Apple removed it from their stores. The social network, which has since sued Amazon, is effectively shut down until it can secure a new host or force Amazon to restore its services.

These actions may seem like legitimate attempts by platforms to tackle Trump’s violence-fuelling rhetoric. The reality, however, is they will do little to truly disengage his supporters or deal with issues of violence and hate speech.

With an election vote count of 74,223,744 (46.9%), the magnitude of Trump’s following is clear. And since being banned from Twitter, he hasn’t shown any intention of backing down.

In his first appearance since the Capitol attack, Trump described the impeachment process as ‘a continuation of the greatest witch hunt in the history of politics’.

Not budging

With more than 47,000 original tweets from Trump’s personal Twitter account (@realdonaldtrump) since 2009, one could argue he used the platform inordinately. There’s much speculation about what he might do now.

Tweeting via @POTUS, the official Twitter account for the president, he said he might consider building his own platform. Twitter promptly removed this tweet. He also tweeted: “We will not be SILENCED!”.

This threat may come with some standing as Trump does have avenues to control various forms of media. In November, Axios reported he was considering launching his own right-wing media venture.

For his followers, the internet remains a “natural hunting ground” where they can continue gaining support through spreading racist and hateful sentiment.

The internet is also notoriously hard to police – it has no real borders, and features such as encryption enable anonymity. Laws differ from state to state and nation to nation; an act deemed illegal in one locale may be legal elsewhere.

It’s no surprise groups including fascists, neo-Nazis, anti-Semites and white supremacists were early and eager adopters of the internet. Back in 1998, former Ku Klux Klan Grand Wizard David Duke wrote online:

I believe that the internet will begin a chain reaction of racial enlightenment that will shake the world by the speed of its intellectual conquest.

As far as efforts to quash such extremism go, they’re usually too little, too late.

Take Stormfront, a neo-Nazi platform described as the web’s first major racial hate site. It was set up in 1995 by a former Klan state leader, and only removed from the open web 22 years later in 2017.




Read more:
Social media giants have finally confronted Trump’s lies. But why wait until there was a riot in the Capitol?


The psychology of hate

Banning Trump from social media won’t necessarily silence him or his supporters. Esteemed British psychiatrist and broadcaster Raj Persaud sums it up well: “narcissists do not respond well to social exclusion”.

Others have highlighted the many options still available for Trump fans to congregate since the departure of Parler, which was used to communicate plans ahead of the siege at the Capitol. Gab is one platform many Trump supporters have flocked to.

It’s important to remember hate speech, racism and violence predate the internet. Those who are predisposed to these ideologies will find a way to connect with others like them.

And censorship likely won’t change their beliefs, since extremist ideologies and conspiracies tend to be heavily spurred on by confirmation bias. This is when people interpret information in a way that reaffirms their existing beliefs.

When Twitter took action to limit QAnon content last year, some followers took this as confirmation of the conspiracy, which claims Satan-worshipping elites from within government, business and media are running a “deep state” against Trump.

Social media and white supremacy: a love story

The promotion of violence and hate speech on platforms isn’t new, nor is it restricted to relatively fringe sites such as Parler.

Queensland University of Technology Digital Media lecturer Ariadna Matamoros-Fernández describes online hate speech as “platformed racism”. This framing is critical, especially in the case of Trump and his followers.

It recognises social media has various algorithmic features which allow for the proliferation of racist content. It also captures the governance structures that tend to favour “free speech” over the safety of vulnerable communities online.

For instance, Matamoros-Fernández’s research found in Australia, platforms such as Facebook “favoured the offenders over Indigenous people” by tending to lean in favour of free speech.

Other research has found Indigenous social media users regularly witness and experience racism and sexism online. My own research has also revealed social media helps proliferate hate speech, including racism and other forms of violence.

On this front, tech companies are unlikely to take action on the scale required, since controversy is good for business. Simply put, there’s no strong incentive for platforms to tackle the issues of hate speech and racism — at least not until failing to act starts to hurt profits.

After Facebook indefinitely banned Trump, its market value reportedly dropped by US$47.6 billion as of Wednesday, while Twitter’s dropped by US$3.5 billion.




Read more:
Profit, not free speech, governs media companies’ decisions on controversy


The need for a paradigm shift

When it comes to imagining a future with less hate, racism and violence, a key mistake is looking for solutions within the existing structure.

Today, online media is an integral part of the structure that governs society. So we look to it to solve our problems.

But banning Trump won’t silence him or the ideologies he peddles. It will not suppress hate speech or even reduce the capacity of individuals to incite violence.

Trump’s presidency will end in the coming days, but extremist groups and the broader movement they occupy will remain, both in real life and online.




Read more:
Reddit removes millions of pro-Trump posts. But advertisers, not values, rule the day




Bronwyn Carlson, Professor, Indigenous Studies, Macquarie University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

No, Twitter is not censoring Donald Trump. Free speech is not guaranteed if it harms others




Katharine Gelber, The University of Queensland

The recent storming of the US Capitol has led a number of social media platforms to remove President Donald Trump’s account. In the case of Twitter, the ban is permanent. Others, like Facebook, have taken him offline until after President-elect Joe Biden’s inauguration next week.

This has led to a flurry of commentary in the Australian media about “free speech”. Treasurer Josh Frydenberg has said he is “uncomfortable” with Twitter’s removal of Trump, while the acting prime minister, Michael McCormack, has described it as “censorship”.

Meanwhile, MPs like Craig Kelly and George Christensen continue to ignore the evidence and promote misinformation about the nature of the violent, pro-Trump mob that attacked the Capitol.

A growing number of MPs are also reportedly calling for consistent and transparent rules to be applied by online platforms in a bid to combat hate speech and other types of harmful speech.

Some have conflated this effort with the restrictions on Trump’s social media usage, as though both of these issues reflect the same problem.

Much of this commentary is misguided, wrong and confusing. So let’s pull it apart a bit.

There is no free speech “right” to incite violence

There is no free speech argument in existence that suggests an incitement of lawlessness and violence is protected speech.

Quite to the contrary. Nineteenth-century free speech proponent John Stuart Mill argued the sole reason one’s liberty may be interfered with (including restrictions on free speech) is “self-protection” — in other words, to protect people from harm or violence.




Read more:
Parler: what you need to know about the ‘free speech’ Twitter alternative


Additionally, incitement to violence is a criminal offence in all liberal democratic orders. There is an obvious reason for this: violence is harmful. It harms those who are immediately targeted (five people died in the riots last week) and those who are intimidated as a result of the violence to take action or speak up against it.

It also harms the institutions of democracy themselves, which rely on elections rather than civil wars and a peaceful transfer of power.

To suggest taking action against speech that incites violence is “censoring” the speaker is completely misleading.

There is no free speech “right” to appear on a particular platform

There is also no free speech argument that guarantees any citizen the right to express their views on a specific platform.

It is ludicrous to suggest there is. If this “right” were to exist, it would mean any citizen could demand to have their opinions aired on the front page of the Sydney Morning Herald and, if refused, claim their free speech had been violated.




Read more:
Trump’s Twitter tantrum may wreck the internet


What does exist is a general right to express oneself in public discourse, relatively free from regulation, as long as one’s speech does not harm others.

Trump still possesses this right. He has a podium in the West Wing designed for this specific purpose, which he can make use of at any time.

Were he to do so, the media would cover what he says, just as they covered his comments prior to, during and immediately after the riots. This included him telling the rioters that he loved them and that they were “very special”.

Trump told his supporters before the Capitol was overrun: ‘if you don’t fight like hell, you’re not going to have a country anymore’.
Jacquelyn Martin/AP

Does the fact he’s the president change this?

In many free speech arguments, political speech is accorded a higher level of protection than other forms of speech (such as commercial speech, for example). Does the fact this debate concerns the president of the United States change things?

No, it does not. There is no doubt Trump has been given considerable leeway in his public commentary prior to — and during the course of — his presidency. However, he has now crossed a line into stoking imminent lawlessness and violence.

This cannot be protected speech just because it is “political”. If this was the case, it would suggest the free speech of political elites can and should have no limits at all.

Yet, in all liberal democracies – even the United States which has the strongest free speech protection in the world – free speech has limits. These include the incitement of violence and crime.

Are social media platforms over-censoring?

The last decade or so has seen a vigorous debate over the attitudes and responses of social media platforms to harmful speech.

The big tech companies have staunchly resisted being asked to regulate speech, especially political speech, on their platforms. They have enjoyed the profits of their business model, while specific types of users – typically the marginalised – have borne the costs.

However, platforms have recently started to respond to demands and public pressure to address the harms of the speech they facilitate – from countering violent extremism to fake accounts, misinformation, revenge porn and hate speech.

They have developed community standards for content moderation that are publicly available. They release regular reports on their content moderation processes.

Facebook has even created an independent oversight board to arbitrate disputes over their decision making on content moderation.

They do not always do very well at this. One of the core problems is their desire to create algorithms and policies that are applicable universally across their global operations. But such a thing is impossible when it comes to free speech. Context matters in determining whether and under what circumstances speech can harm. This means they make mistakes.




Read more:
Why the business model of social media giants like Facebook is incompatible with human rights


Where to now?

The calls by MPs Anne Webster and Sharon Claydon to address hate speech online are important. They are part of the broader push internationally to find ways to ensure the benefits of the internet can be enjoyed more equally, and that a person’s speech does not silence or harm others.

Arguments about harm are longstanding, and have been widely accepted globally as forming a legitimate basis for intervention.

But the suggestion Trump has been censored is simply wrong. It misleads the public into believing all “free speech” claims have equal merit. They do not.

We must work to ensure harmful speech is regulated in order to ensure broad participation in the public discourse that is essential to our lives — and to our democracy. Anything less is an abandonment of the principles and ethics of governance.

Katharine Gelber, Professor of Politics and Public Policy, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Social media giants have finally confronted Trump’s lies. But why wait until there was a riot in the Capitol?


Timothy Graham, Queensland University of Technology

Amid the chaos in the US Capitol, stoked largely by rhetoric from President Donald Trump, Twitter has locked his account, with 88.7 million followers, for 12 hours.

Facebook and Instagram quickly followed suit, locking Trump’s accounts — with 35.2 million followers and 24.5 million, respectively — for at least two weeks, the remainder of his presidency. This ban was extended from 24 hours.

The locks are the latest effort by social media platforms to clamp down on Trump’s misinformation and baseless claims of election fraud.

They came after Twitter labelled a video posted by Trump and said it posed a “risk of violence”. Twitter removed users’ ability to retweet, like or comment on the post — the first time this has been done.

In the video, Trump told the agitators at the Capitol to go home, but at the same time called them “very special” and said he loved them for disrupting the Congressional certification of President-elect Joe Biden’s win.

That tweet has since been taken down for “repeated and severe violations” of Twitter’s civic integrity policy. YouTube and Facebook have also removed copies of the video.

But as people across the world scramble to make sense of what’s going on, one thing stands out: the events that transpired today were not unexpected.

Given the lack of regulation and responsibility shown by platforms over the past few years, it’s fair to say the writing was on the wall.

The real, violent consequences of misinformation

While Trump is no stranger to contentious and even racist remarks on social media, Twitter’s action to lock the president’s account is a first.

The line was arguably crossed by Trump’s implicit incitement of violence and disorder within the halls of the US Capitol itself.

Nevertheless, it would have been a difficult decision for Twitter (and Facebook and Instagram), with several factors at play. Some of these are short-term, such as the immediate potential for further violence.

Then there’s the question of whether tighter regulation could further incite rioting Trump supporters by feeding into their theories claiming the existence of a large-scale “deep state” plot against the president. It’s possible.




Read more:
QAnon believers will likely outlast and outsmart Twitter’s bans


But a longer-term consideration — and perhaps one at the forefront of the platforms’ priorities — is how these actions will affect their value as commercial assets.

I believe the platforms’ biggest concern is their own bottom line. They are commercial companies legally obliged to pursue profits for shareholders. Commercial imperatives and user engagement are at the forefront of their decisions.

What happens when you censor a Republican president? You can lose a huge chunk of your conservative user base, or upset your shareholders.

Despite what we think of them, or how we might use them, platforms such as Facebook, Twitter, Instagram and YouTube aren’t set up in the public interest.

For them, it’s risky to censor a head of state when they know that content is profitable. Doing it involves a complex risk calculus — with priorities being shareholders, the companies’ market value and their reputation.




Read more:
Reddit removes millions of pro-Trump posts. But advertisers, not values, rule the day


Walking a tightrope

The platforms’ decisions to not only force the removal of several of Trump’s posts but also to lock his accounts carries enormous potential loss of revenue. It’s a major and irreversible step.

And they are now forced to keep a close eye on one another. If one appears too “strict” in its censorship, it may attract criticism and lose user engagement and ultimately profit. At the same time, if platforms are too loose with their content regulation, they must weather the storm of public critique.

You don’t want to be the last organisation to make the tough decision, but you don’t necessarily want to be the first, either — because then you’re the “trial balloon” who volunteered to potentially harm the bottom line.

For all major platforms, the past few years have presented high stakes. Yet there have been plenty of opportunities to stop the situation snowballing to where it is now.

From Trump’s baseless election fraud claims to his false ideas about the coronavirus, time and again platforms have turned a blind eye to serious cases of mis- and disinformation.

The storming of the Capitol is a logical consequence of what has arguably been a long time coming.

The coronavirus pandemic illustrated this. While Trump was partially censored by Twitter and Facebook for misinformation, the platforms failed to take lasting action to deal with the issue at its core.

In the past, platforms have cited constitutional reasons to justify not censoring politicians. They have claimed a civic duty to give elected officials an unfiltered voice.

This line of argument should have ended with the “Unite the Right” rally in Charlottesville in August 2017, when Trump responded to the killing of an anti-fascism protester by claiming there were “very fine people on both sides”.

An age of QAnon, Proud Boys and neo-Nazis

While there’s no silver bullet for online misinformation and extremist content, there’s also no doubt platforms could have done more in the past that may have prevented the scenes witnessed in Washington DC.

In a crisis, there’s a rush to make sense of everything. But we need only look at what led us to this point. Experts on disinformation have been crying out for platforms to do more to combat disinformation and its growing domestic roots.

Now, in 2021, extremists such as neo-Nazis and QAnon believers no longer have to lurk in the depths of online forums or commit lone acts of violence. Instead, they can violently storm the Capitol.

It would be a cardinal error to not appraise the severity and importance of the neglect that led us here. In some ways, perhaps that’s the biggest lesson we can learn.


This article has been updated to reflect the news that Facebook and Instagram extended their 24-hour ban on President Trump’s accounts.

Timothy Graham, Senior Lecturer, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Calls for an ABC-run social network to replace Facebook and Google are misguided. With what money?



Shutterstock

Fiona R Martin, University of Sydney

If Facebook prevented Australian news from being shared on its platform, could the ABC start its own social media service to compensate? While this proposal from the Australia Institute is a worthy one, it’s an impossible ask in the current political climate.

The suggestion is one pillar of the think tank’s new Tech-xit report.

The report canvasses what the Australian government should do if Facebook and Google withdraw their news-related services from Australia, in reaction to the Australian Competition and Consumer Commission’s draft news media bargaining code.

Tech-xit rightly notes the ABC is capable of building social media that doesn’t harvest Australians’ personal data. However, it overlooks the costs and challenges of running a social media service — factors raised in debate over the new code.

Platforms react (badly) to the code

The ACCC’s code is a result of years of research into the effects of platform power on Australian media.

It requires Facebook and Google to negotiate with Australian news businesses about licensing payments for hosting news excerpts, providing access to news user data and information on pending news feed algorithm changes.

Predictably, the tech companies are not happy. They argue they make far less from news than the ACCC estimates, have greater costs and return more benefit to the media.

If the code becomes law, Facebook has threatened to stop Australian users from sharing local or international news. Google notified Australians its free services would become “at risk”, although it later said it would negotiate if the draft law was changed in its favour.

Facebook’s withdrawal, which the Tech-xit report sees as being likely if the law passes, would reduce Australians’ capacity to share vital news about their communities, activities and businesses.




Read more:
If Facebook really pulls news from its Australian sites, we’ll have a much less compelling product


ABC to the rescue?

Cue the ABC then, says Jordan Guiao, the report’s author. Guiao is the former head of social media for both the ABC and SBS, and now works at the institute’s Centre for Responsible Technology.

He argues that, if given the funding, ABC Online could reinvent itself to become a “national social platform connecting everyday Australians”. He says all the service would have to do is add

distinct user profiles, user publishing and content features, group connection features, chat, commenting and interactive discussion capabilities.

As a trusted information source, he proposes the ABC could enable “genuine exchange and influence on decision making” and “provide real value to local communities starved of civic engagement”.

Financial reality check

It’s a bold move to suggest the ABC could start yet another major network when it has just had to cut A$84 million from its budget and lose more than 200 staff.

The institute’s idea is very likely an effort to persuade the Morrison government it should redirect some of that funding back to Aunty, which has a history of digital innovation with ABC Online, iView, Q&A and the like.

However, the government has repeatedly denied it has cut funding to the national broadcaster. It hasn’t provided catch-up emergency broadcasting funds since the ABC covered our worst-ever fire season. This doesn’t bode well for a change of mind on future allocations.

The government also excluded the ABC and SBS as beneficiaries of the news media bargaining code negotiations.

The ABC doesn’t even have access to start-up venture capital the way most social media companies do. According to Crunchbase, Twitter and Reddit — the two most popular news-sharing platforms after Facebook — have raised roughly US$1.5 billion and US$550 million respectively in investment rounds, allowing them to constantly innovate in service delivery.

Operational challenges

In contrast, over the past decade, ABC Online has had to reduce many of the “social” services it once offered. This is largely due to the cost of moderating online communities and managing user participation.

Social media content moderation requires an abundance of time, money and human resources.
Shutterstock

First, news comments sections were canned, and online communities such as the Four Corners forums and The Drum website were closed.

Last year, the ABC’s flagship site for regional and rural user-created stories, ABC Open, was also shut down.

Even if the government were to inject millions into an “ABC Social”, it’s unlikely the ABC could deal with the problems of finding and removing illegal content at scale.

It’s an issue that still defeats the social media platforms themselves, and the ABC has neither the machine learning expertise nor the funds for an army of outsourced moderators.

The move would also expose the ABC to accusations it was crowding out private innovation in the platform space.

A future without Facebook

It’s unclear whether Facebook will go ahead with its threat of preventing Australian users from sharing news on its platform, given the difficulties with working out exactly who an Australian user is.

For instance, the Australian public includes dual citizens, temporary residents, international students and business people, and expatriates.

If it does, why burden the ABC with the duty to recreate social media? Facebook’s withdrawal could be a boon for Twitter, Reddit and whatever may come next.

In the meantime, if we restored the ABC’s funding, it could develop more inventive ways to share local news online that can’t be threatened by Facebook and Google.




Read more:
Latest $84 million cuts rip the heart out of the ABC, and our democracy




Fiona R Martin, Associate Professor in Convergent and Online Media, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Twitter is banning political ads – but the real battle for democracy is with Facebook and Google



Twitter should get credit for its sensible move, but the microblogging company is tiny compared to Facebook and Google.
Shutterstock

Johan Lidberg, Monash University

Finally, some good news from the weirdo-sphere that is social media. Twitter CEO Jack Dorsey has announced that, effective November 22, the microblogging platform will ban all political advertising – globally.

This is a momentous move by Twitter. It comes when Facebook and its CEO Mark Zuckerberg are under increasing pressure to deal with the amount of mis- and disinformation published via paid political advertising on Facebook.

Zuckerberg recently told a congressional hearing Facebook had no plans to fact-check political ads, and he did not answer a direct question from Congresswoman Alexandria Ocasio-Cortez about whether Facebook would take down political ads found to be untrue. Not a good look.

A few days after Zuckerberg’s train-wreck appearance before the congressional committee, Twitter announced its move.




Read more:
Merchants of misinformation are all over the internet. But the real problem lies with us


While Twitter should get credit for its sensible move, the microblogging company is tiny compared to Facebook and Google. So, until the two giants change, Twitter’s political ad ban will have little effect on elections around the globe.

A symptom of the democratic flu

It’s important to call out Google on political advertising. The company often manages to fly under the radar on this issue, hiding behind Facebook, which takes most of the flak.

The global social media platforms are injecting poison into liberal democratic systems around the globe. The misinformation and outright lies they allow to be published on their platforms are partly responsible for the increasingly bitter and deep partisan divides in most mature liberal democracies.

Add to this the micro-targeting of voters illustrated by the Cambridge Analytica scandal, and a picture emerges of long-standing democratic systems under extreme stress. This is clearly exemplified by the UK parliament’s paralysis over Brexit and the canyon-deep political divides in the US.




Read more:
Why you should talk to your children about Cambridge Analytica


Banning political advertising only deals with a symptom of the democratic flu the platforms are causing. The root cause of the flu is the fact social media platforms are no longer only platforms – they are publishers.

Until they acknowledge this and agree to adhere to the legal and ethical frameworks connected with publishing, our democracies will not recover.

Not platforms, but publishers

Being a publisher is complex and much more expensive than being a platform. You have to hire editorial staff (unless you can create algorithms advanced enough to do editorial tasks) to fact-check, edit and curate content. And you have to become a good corporate citizen, accepting you have social responsibilities.

Convincing the platforms to accept their publisher role is the most long-term and sustainable way of dealing with the current toxic content issue.

Accepting publisher status could be a win-win, where the social media companies rebuild trust with the public and governments by acting ethically and socially responsibly, stopping the poisoning of our democracies.

Mark Zuckerberg claims Facebook users being able to publish lies and misinformation is a free speech issue. It is not. Free speech is a privilege as well as a right and, like all privileges, it comes with responsibilities and limitations.

Examples of limitations are defamation laws and racial vilification and discrimination laws. And that’s just the legal framework. The strong ethical framework that applies to publishing should be added to this.

Ownership concentration like never before

Then, there’s the global social media oligopoly issue. Never before in recorded human history have we seen any industry achieve the level of ownership concentration displayed by the social media companies. This is why the issue is so deeply serious: it’s global, it reaches billions, and the money and profits involved are staggering.




Read more:
The fightback against Facebook is getting stronger


Facebook co-founder Chris Hughes got it absolutely right when, in his New York Times article, he pointed out that the Federal Trade Commission – the US equivalent of the Australian Competition and Consumer Commission – got it wrong when it allowed Facebook to buy Instagram and WhatsApp.

Hughes wants Facebook broken up and points to the attempts from parts of US civil society moving in this direction. He writes:

This movement of public servants, scholars and activists deserves our support. Mark Zuckerberg cannot fix Facebook, but our government can.

Yesterday, I posted on my Facebook timeline for the first time since the Cambridge Analytica scandal broke. I made the point that after Twitter’s announcement, the ball is now squarely in Facebook’s and Google’s courts.

For research and professional reasons, I cannot delete my Facebook account. But I can pledge not to be an active Facebook user until the company grows up and shoulders its social responsibility as an ethical publisher that enhances our democracies instead of undermining them.

Johan Lidberg, Associate Professor, School of Media, Film and Journalism, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Australian media regulators face the challenge of dealing with global platforms Google and Facebook



‘Google and Facebook are global companies, headquartered in the US, for whom Australia is a significant but relatively small market.’
Shutterstock/Roman Pyshchyk

Terry Flew, Queensland University of Technology

With concerns growing worldwide about the economic power of digital technology giants such as Google and Facebook, there was plenty of interest internationally in Australia’s Digital Platforms Inquiry.

The Australian Competition and Consumer Commission (ACCC) inquiry was seen as providing a forensic account of the market dominance of digital platforms, and of the implications for Australian media and the rights of citizens around privacy and data protection.

The inquiry’s final report, released last month, has been analysed from perspectives such as competition policy, consumer protection and the future of journalism.




Read more:
Consumer watchdog calls for new measures to combat Facebook and Google’s digital dominance


But the major limitation facing the ACCC, and the Australian government, in developing new regulations for digital platforms is jurisdictional authority – given these companies are headquartered in the United States.

More ‘platform neutral’ approach

Among the ACCC’s 23 recommendations is a proposal to reform media regulations to move from the current platform-specific approaches (different rules for television, radio, and print media) towards a “platform-neutral” approach.

This will ensure comparable functions are effectively and consistently regulated:

Digitalisation and the increase in online sources of news and media content highlight inconsistencies in the current sector-specific approach to media regulation in Australia […]

Digital platforms increasingly perform similar functions to media businesses, such as selecting and curating content, evaluating content, and ranking and arranging content online. Despite this, virtually no media regulation applies to digital platforms.

The ACCC’s recommendations to harmonise regulations across different types of media draw on major Australian public inquiries from the early 2010s, such as the Convergence Review and the Australian Law Reform Commission’s review of the national media classification system. These reports identified the inappropriateness of “silo-ised” media laws and regulations in an age of digital convergence.




Read more:
What Australia’s competition boss has in store for Google and Facebook


The ACCC also questions the continued appropriateness of the distinction between platforms and publishers in an age where the largest digital platforms are not simply the carriers of messages circulated among their users.

The report observes that such platforms are increasingly at the centre of digital content distribution. Online consumers increasingly access social news through platforms such as Facebook and Google, as well as video content through YouTube.

The advertising dollar

While the ACCC inquiry focused on the impact of digital platforms on news, we can see how they have transformed the media landscape more generally, and where issues of the wider public good arise.

Their dominance over advertising has undercut traditional media business models. Online now accounts for about 50% of total advertising spend, and the ACCC estimates that 71 cents of every dollar spent on digital advertising in Australia goes to Google or Facebook.

All media are now facing the implications of a more general migration to online advertising, as platforms can better micro-target consumers rather than relying on the broad brush approach of mass media advertising.

The larger issue facing potential competitors to the digital giants is the accumulation of user data. This includes the lack of transparency around algorithmic sorting of such data, and the capacity to use machine learning to apply powerful predictive analytics to “big data”.

In line with recent critiques of platform capitalism, the ACCC is concerned about the lack of information consumers have about what data the platforms hold and how it’s being used.

It’s also concerned the “winner-takes-most” nature of digital markets creates a long-term structural crisis for media businesses, with particularly severe implications for public interest journalism.

Digital diversity

Digital platform companies do not sit easily within a recognisable industry sector as they branch across information technology, content media, and advertising.

They’re also not alike. While all rely on the capacity to generate and make use of consumer data, their business models differ significantly.

The ACCC chose to focus only on Google and Facebook, but they are quite different entities.

Google dominates search advertising and is largely a content aggregator, whereas Facebook for the most part provides display advertising that accompanies user-generated social media. This presents its own challenges in crafting a regulatory response to the rise of these digital platform giants.

A threshold issue is whether digital platforms should be understood to be media businesses, or businesses in a more generic sense.

Communications policy in the 1990s and 2000s commonly differentiated digital platforms as carriers. This indemnified them from laws and regulations relating to content that users uploaded onto their sites.

But this carriage/content distinction has always coexisted with active measures on the part of the platform companies to manage content that is hosted on their sites. Controversies around content moderation, and the legal and ethical obligations of platform providers, have accelerated greatly in recent years.

To the degree that companies such as Google and Facebook increasingly operate as media businesses, this would bring aspects of their activities within the regulatory purview of the Australian Communications and Media Authority (ACMA).

The ACCC recommended ACMA should be responsible for brokering a code of conduct governing commercial relationships between the digital platforms and news providers.




Read more:
Consumer watchdog: journalism is in crisis and only more public funding can help


This would give it powers related to copyright enforcement, allow it to monitor how platforms are acting to guarantee the trustworthiness and reliability of news content, and minimise the circulation of “fake news” on their sites.

Overseas, but over here

Companies such as Google and Facebook are global companies, headquartered in the US, for whom Australia is a significant but relatively small market.

The capacity to address competition and market dominance issues is limited by the fact real action could only meaningfully occur in their home market of the US.

Australian regulators are going to need to work closely with their counterparts in other countries and regions: the US and the European Union are the two most significant in this regard.

Terry Flew, Professor of Communication and Creative Industries, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Shadow profiles – Facebook knows about you, even if you’re not on Facebook


Andrew Quodling, Queensland University of Technology

Facebook’s founder and chief executive Mark Zuckerberg faced two days of grilling before US politicians this week, following concerns over how his company deals with people’s data.

But the data Facebook has on people who are not signed up to the social media giant also came under scrutiny.

During Zuckerberg’s congressional testimony he claimed to be ignorant of what are known as “shadow profiles”.

Zuckerberg: I’m not — I’m not familiar with that.

That’s alarming, given that we have been discussing this element of Facebook’s non-user data collection for the past five years, ever since the practice was brought to light by researchers at Packet Storm Security.

Maybe it was just the phrase “shadow profiles” with which Zuckerberg was unfamiliar. It wasn’t clear, but others were not impressed by his answer.


Facebook’s proactive data-collection processes have been under scrutiny in previous years, especially as researchers and journalists have delved into the workings of Facebook’s “Download Your Information” and “People You May Know” tools to report on shadow profiles.

Shadow profiles

To explain shadow profiles simply, let’s imagine a social group of three people – Ashley, Blair and Carmen – who already know one another, and have each other’s email addresses and phone numbers in their phones.

If Ashley joins Facebook and uploads her phone contacts to Facebook’s servers, then Facebook can proactively suggest friends whom she might know, based on the information she uploaded.

For now, let’s imagine that Ashley is the first of her friends to join Facebook. The information she uploaded is used to create shadow profiles for both Blair and Carmen — so that if Blair or Carmen joins, they will be recommended Ashley as a friend.

Next, Blair joins Facebook, uploading his phone’s contacts too. Thanks to the shadow profile, he has a ready-made connection to Ashley in Facebook’s “People You May Know” feature.

At the same time, Facebook has learned more about Carmen’s social circle — in spite of the fact that Carmen has never used Facebook, and therefore has never agreed to its policies for data collection.
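The Ashley–Blair–Carmen example above can be sketched in a few lines of code. This is purely a hypothetical illustration of the logic described: the class and method names (`ContactGraph`, `join`, `people_you_may_know`) are invented for the example and bear no relation to Facebook’s actual systems.

```python
# Hypothetical sketch of shadow-profile creation from contact uploads.
# All names and structures here are illustrative assumptions.

class ContactGraph:
    def __init__(self):
        self.members = set()       # people who have accounts
        self.known_contacts = {}   # person -> set of members who uploaded them

    def join(self, person, uploaded_contacts):
        """A new member joins and uploads their address book."""
        self.members.add(person)
        for contact in uploaded_contacts:
            # The link is recorded even if `contact` has no account --
            # that record is, in effect, a shadow profile.
            self.known_contacts.setdefault(contact, set()).add(person)

    def people_you_may_know(self, person):
        """Suggest existing members who previously uploaded this person."""
        return self.known_contacts.get(person, set()) & self.members


graph = ContactGraph()
graph.join("Ashley", ["Blair", "Carmen"])  # Ashley uploads her contacts
graph.join("Blair", ["Ashley", "Carmen"])  # Blair joins later

graph.people_you_may_know("Blair")   # {"Ashley"} -- a ready-made connection
graph.known_contacts["Carmen"]       # {"Ashley", "Blair"} -- Carmen never joined
```

The point of the sketch is the last line: the service holds a record keyed to Carmen, built entirely from other people’s uploads, even though Carmen has never agreed to any data-collection policy.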

Despite the scary-sounding name, I don’t think there is necessarily any malice or ill will in Facebook’s creation and use of shadow profiles.

It seems like an earnestly designed feature in service of Facebook’s goal of connecting people. It’s a goal that clearly also aligns with Facebook’s financial incentives for growth and garnering advertising attention.

But the practice brings to light some thorny issues around consent, data collection, and personally identifiable information.

What data?

Some of the questions Zuckerberg faced this week highlighted issues relating to the data that Facebook collects from users, and the consent and permissions that users give (or are unaware they give).

Facebook is often quite deliberate in its characterisations of “your data”, rejecting the notion that it “owns” user data.

That said, there is a lot of data on Facebook, and what exactly is “yours” or simply “data related to you” isn’t always clear. “Your data” notionally includes your posts, photos, videos, comments, content, and so on – anything that could be considered copyrightable work or intellectual property (IP).

What’s less clear is the state of your rights relating to data that is “about you”, rather than supplied by you. This is data that is created by your presence or your social proximity to Facebook.

Examples of data “about you” might include your browsing history and data gleaned from cookies, tracking pixels, and the like button widget, as well as social graph data supplied whenever Facebook users supply the platform with access to their phone or email contact lists.

Like most internet platforms, Facebook rejects any claim to ownership of the IP that users post. To avoid falling foul of copyright issues in the provision of its services, Facebook demands (as part of its user agreements and Statement of Rights and Responsibilities) a:

…non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.

Data scares

If you’re on Facebook then you’ve probably seen a post that keeps making the rounds every few years, saying:

In response to the new Facebook guidelines I hereby declare that my copyright is attached to all of my personal details…

Part of the reason we keep seeing data scares like this is that Facebook’s lacklustre messaging around user rights and data policies has contributed to confusion, uncertainty and doubt among its users.




Read more:
How to stop haemorrhaging data on Facebook


It was a point that Republican Senator John Kennedy raised with Zuckerberg this week.

Senator Kennedy’s exclamation is a strong but fair assessment of the failings of Facebook’s policy messaging.

After the grilling

Zuckerberg and Facebook should learn from this congressional grilling that they have struggled and occasionally failed in their responsibilities to users.

It’s important that Facebook now makes efforts to communicate more strongly with users about their rights and responsibilities on the platform, as well as the responsibilities that Facebook owes them.

This should go beyond a mere awareness-style PR campaign. It should seek to truly inform and educate Facebook’s users, and people who are not on Facebook, about their data, their rights, and how they can meaningfully safeguard their personal data and privacy.




Read more:
Would regulation cement Facebook’s market power? It’s unlikely


Given the magnitude of Facebook as an internet platform, and its importance to users across the world, the spectre of regulation will continue to raise its head.

Ideally, the company should look to broaden its governance horizons by seeking to truly engage in consultation and reform with Facebook’s stakeholders – its users – as well as the civil society groups and regulatory bodies that seek to empower users in these spaces.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.