Here’s what a privacy policy that’s easy to understand could look like



We need a simple system for categorising data privacy settings, similar to the way Creative Commons specifies how work can be legally shared.
Shutterstock

Alexander Krumpholz, CSIRO and Raj Gaire, CSIRO

Data privacy awareness has recently gained momentum, thanks in part to the Cambridge Analytica data breach and the introduction of the European Union’s General Data Protection Regulation (GDPR).

One of the key elements of the GDPR is that it requires companies to simplify their privacy-related terms and conditions (T&Cs) so that they are understandable to the general public. As a result, companies have been rapidly updating their T&Cs and notifying their existing users.




Read more:
Why your app is updating its privacy settings and how this will affect businesses


On one hand, these new T&Cs are simplified legal documents. On the other hand, they are still too long. Unfortunately, most of us still skip reading them and simply click “accept”.

Wouldn’t it be nice if we could specify our general privacy preferences in our devices, have them check privacy policies when we sign up for apps, and warn us if the agreements overstep?

This dream is achievable.

Creative Commons as a template

For decades, software was sold or licensed under licence agreements that were several pages long, written by lawyers and hard to understand. Later, software came with standardised licences, such as the GNU General Public License, the BSD licence or the Apache License. These licences define users’ rights in different use cases and protect the provider from liability.

However, they were still hard to understand.

With the foundation of Creative Commons (CC) in 2001, a simplified licence was developed that reduced complex legal copyright agreements to a small set of copyright classes.

These licences are represented by small icons and short acronyms, and can be used for images, music, text and software. This helps creative users to immediately recognise how – or whether – they can use the licensed content in their own work.




Read more:
Explainer: Creative Commons


Imagine you have taken a photo and want to share it with others for non-commercial purposes only, such as to illustrate a story on a not-for-profit news website. You could licence your photo as CC BY-NC when uploading it to Flickr. In Creative Commons terms, the abbreviation BY (for attribution) requires the user to cite the owner and NC (non-commercial) restricts the use to non-commercial applications.

Internet search engines index these attributes along with the files. So if I search via Google, for example, for photos explicitly licensed with those restrictions, I will find your photo. This is possible because even computers can understand these licences.
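In software terms, this machine-readability can be sketched in a few lines. The photo records and the parsing below are our own illustrative placeholders – real Creative Commons metadata is embedded in web pages as structured markup – but the principle is the same: once rights are reduced to short codes, filtering content by them is trivial.

```python
# Illustrative sketch only: the photo records and parsing below are invented
# placeholders, not a real metadata API. The point is that short licence codes
# make usage rights trivially machine-checkable.

PHOTOS = [
    {"title": "Harbour at dawn", "licence": "CC BY-NC"},
    {"title": "Stock press shot", "licence": "All rights reserved"},
    {"title": "Conference crowd", "licence": "CC BY-SA"},
]

def licence_elements(photo):
    """Split a Creative Commons code such as 'CC BY-NC' into its elements."""
    code = photo["licence"]
    if not code.startswith("CC "):
        return set()  # not Creative Commons: no machine-readable rights
    return set(code[3:].split("-"))

# A search for reusable content (any CC licence) finds both CC-licensed photos.
matches = [p["title"] for p in PHOTOS if licence_elements(p)]
print(matches)  # → ['Harbour at dawn', 'Conference crowd']

# BY means the owner must be credited; NC restricts use to non-commercial.
must_credit = "BY" in licence_elements(PHOTOS[0])
```

A search engine doing the same check across millions of files is what makes the Flickr example above work.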

We need to develop Privacy Commons

Similar to the Creative Commons licences under which creative content is shared with others, we need a Privacy Commons through which companies can inform users how their data will be used.

A Privacy Commons needs to be legally binding, simple for people to understand and simple for computers to understand. Here are our suggestions for what it might look like.

We propose that the Privacy Commons classifications cover at least three dimensions of private data: collection, protection, and spread.

What data is being collected?

This dimension specifies what personal information is collected from the user, and is therefore at risk – for example, name, email, phone number, address, date of birth, biometrics (including photos), relationships, networks, personal preferences and political opinions. These could be categorised at different levels of sensitivity.

How is your data protected?

This dimension specifies:

  • where your data is stored – within an app, on one server, or on servers at multiple locations
  • how it is stored and transported – in plain text or encrypted
  • how long the data is kept – days, months, years or permanently
  • how access to your data is controlled within the organisation – this indicates how well your data is protected against potentially malicious actors like hackers.

How is your data spread?

In other words, who is your data shared with? This dimension tells you whether or not the data is shared with third parties. If the data is shared, will it be de-identified appropriately? Is it shared for research purposes, or sold for commercial purposes? Are there any further controls in place after the data is shared? Will it be deleted by the third party when the user deletes it at the primary organisation?
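To see how such a classification could be made simple for computers to understand, here is a hypothetical sketch in Python. The field names and levels are invented for illustration – they are not part of any real standard – but they show how a device could automatically compare an app's classification across the three dimensions against a user's stated preferences and warn about any overstep.

```python
# Hypothetical encoding of the three proposed dimensions. All field names and
# levels here are our own invented placeholders, not part of any real standard.

APP_POLICY = {
    "collection": {"name", "email", "location"},          # what is collected
    "protection": {"encrypted": True, "retention_days": 365},
    "spread": {"third_parties": True, "deidentified": False},
}

USER_PREFERENCES = {
    "collection": {"name", "email"},   # the most the user will allow
    "require_encryption": True,
    "allow_third_parties": False,
}

def violations(policy, prefs):
    """Return a list of ways the policy oversteps the user's preferences."""
    problems = []
    extra = policy["collection"] - prefs["collection"]
    if extra:
        problems.append(f"collects more than allowed: {sorted(extra)}")
    if prefs["require_encryption"] and not policy["protection"]["encrypted"]:
        problems.append("data not encrypted")
    if policy["spread"]["third_parties"] and not prefs["allow_third_parties"]:
        problems.append("shared with third parties")
    return problems

print(violations(APP_POLICY, USER_PREFERENCES))
# flags the extra "location" collection and the third-party sharing
```

This is the kind of check a phone could run at install time, warning the user before they click “accept”.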




Read more:
94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour


Privacy Commons would help companies think about user privacy before offering services. It would also help solve the problem of communicating about privacy, in the same way that Creative Commons is solving the problem of licensing for humans and computers. Similar ideas have been discussed in the past, such as Mozilla’s privacy icons project. We need to revisit those ideas in the contemporary context of the GDPR.

Such a system would allow you to specify Privacy Commons settings in the configuration of your children’s devices, so that only appropriate apps can be installed. Privacy Commons could also be applied to inform you about the use of your data gathered for other purposes like loyalty rewards cards, such as FlyBuys.

Of course, Privacy Commons will not solve everything.

For example, it will still be a challenge to address concerns about third party personal data brokers like Acxiom or Oracle collecting, linking and selling our data without most of us even knowing.

But at least it will be a step in the right direction.

Alexander Krumpholz, Senior Experimental Scientist, CSIRO and Raj Gaire, Senior Experimental Scientist, CSIRO

This article was originally published on The Conversation. Read the original article.


Trolls, fanboys and lurkers: understanding online commenting culture shows us how to improve it



The way user interfaces are designed can impact the kind of community that gathers.
Shutterstock

Renee Barnes, University of the Sunshine Coast

Do you call that a haircut? I hope you didn’t pay for it.

Oh please this is rubbish, you’re a disgrace to yourself and your profession.

These are just two examples of comments that have followed articles I have written in my career. While they may seem benign compared with the sort of violent and vulgar comments that are synonymous with cyberbullying, they are examples of the uncivil and antisocial behaviour that plagues the internet.

If these comments were directed at me in any of my interactions in everyday life – when buying a coffee or at my monthly book club – they would be incredibly hurtful and certainly not inconsequential.

Drawing on my own research, as well as that of researchers in other fields, my new book “Uncovering Online Commenting Culture: Trolls, Fanboys and Lurkers” attempts to help us understand online behaviours, and outlines productive steps we can all take towards creating safer and kinder online interactions.




Read more:
Rude comments online are a reality we can’t get away from


Steps we all can take

Online abuse is a social problem that just happens to be powered by technology. Solutions are needed that not only defuse the internet’s power to amplify abuse, but also encourage crucial shifts in social norms and values within online communities.

Recognise that it’s a community

The first step is to ensure we view our online interactions as an act of participation in a community. What takes place online will then begin to line up with our offline interactions.

If any of the cruel comments that often form part of online discussion were said to you in a restaurant, you would expect witnesses around you to support you. We must have the same expectations online.

Know our audience

We learn to socialise offline based on visual and verbal cues given by the people with whom we interact. When we move social interactions to an online space where those cues are removed or obscured, a fundamental component of how we moderate our own behaviour is also eliminated. Without these social cues, it’s difficult to determine whether content is appropriate.

Research has shown that most social media users imagine a very different audience to the actual audience reading their updates. We often imagine our audience as people we associate with regularly offline. However, a political statement that may be supported by close family and friends could be offensive to former colleagues in our broader online network.

Understand our own behaviour

Emotion plays a role in fuelling online behaviour – emotive comments can inspire further emotive comments in an ongoing feedback loop. Aggression can thus incite aggression in others, but it can also establish a behavioural norm within the community that aggression is acceptable.




Read more:
How empathy can make or break a troll


Understanding our online behaviour can help us take an active role in shaping the norms and values of our online communities by demonstrating appropriate behaviour.

It can also inform education initiatives for our youngest online users. We must teach them to remain conscious of the disjuncture between our imagined audience and the actual audience, thereby ingraining productive social norms for generations to come. Disturbingly, almost 70% of those aged between 18 and 29 have experienced some form of online harassment, compared with one-third of those aged 30 and older.

What organisations and institutions can do

That is not to say that we should absolve the institutions that profit from our online interactions. Social networks such as Facebook and Twitter also have a role to play.

User interface design

The design of user interfaces affects how easily we interact, the types of individuals who comment, and how we behave.

Drawing on psychological research, we can link particular personality traits with antisocial behaviour online. This is significant because simple changes to the interfaces we use to communicate can influence which personality types will be inclined to comment.

Using interface design to encourage participation from those who will leave positive comments, and creating barriers for those inclined to leave abusive ones, is one step that online platforms can take to minimise harmful behaviours.

For example, those who are highly agreeable prefer anonymity when communicating online. Therefore, eliminating anonymity on websites (an often touted response to hostile behaviour) could discourage those agreeable individuals who would leave more positive comments.

Moderation policies

Conscientious individuals are linked to more pro-social comments. They prefer high levels of moderation, and systems where quality comments are highlighted or ranked by other users.

Riot Games, publisher of the notorious multiplayer game League of Legends, has had great success in mitigating offensive behaviour by putting measures in place to promote the gaming community’s shared values. This included a tribunal of players who could determine punishment for people involved in uncivilised behaviour.

Analytics and reporting

Analytical tools, visible data on who visits a site, and a real-time guide to who is reading comments can help us form a more accurate picture of our audience. This could reduce the risk of unintentional offence.

Providing clear processes for reporting inappropriate behaviour, and acting quickly to punish it, will also encourage us to take an active role in cleaning up our online communities.




Read more:
How we can keep our relationships during elections: don’t talk politics on social media


We can and must expect more of our online interactions. Our behaviour and how we respond to the behaviour of others within these communities will contribute to the shared norms and values of an online community.

However, there are institutional factors that can affect the behaviours displayed. It is only through a combination of both personal and institutional responses to antisocial behaviour that we will create more inclusive and harmonious online communities.

Renee Barnes, Senior Lecturer, Journalism, University of the Sunshine Coast

This article was originally published on The Conversation. Read the original article.

94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour



It would take the average person 244 hours per year (6 working weeks) to read all privacy policies that apply to them.
Shutterstock

Katharine Kemp, UNSW

Australians are agreeing to privacy policies they are not comfortable with and would like companies only to collect data that is essential for the delivery of their service. That’s according to new, nation-wide research on consumer attitudes to privacy policies released by the Consumer Policy Research Centre (CPRC) today.

These findings are particularly important since the government’s announcement last week that it plans to implement “open banking” (which gives consumers better access to and control over their banking data) as the first stage of the proposed “consumer data right” from July 2019.




Read more:
How not to agree to clean public toilets when you accept any online terms and conditions


Consumer advocates argue that existing privacy regulation in Australia needs to be strengthened before this new regime is implemented. In many cases, they say, consumers are not truly providing their “informed consent” to current uses of their personal information.

While some blame consumers for failing to read privacy policies, I argue that not reading is often rational behaviour under the current consent model. We need improved standards for consent under our Privacy Act as a first step in improving data protection.

Australians are not reading privacy policies

Under the Privacy Act, in many cases, the collection, use or disclosure of personal information is justified by the individual’s consent. This is consistent with the “notice and choice” model for privacy regulation: we receive notice of the proposed treatment of our information and we have a choice about whether to accept.

But according to the CPRC Report, most Australians (94%) do not read all privacy policies that apply to them. While some suggest this is because we don’t care about our privacy, there are four good reasons why people who do care about their privacy don’t read all privacy policies.

https://datawrapper.dwcdn.net/hJXfh/1/

We don’t have enough time

There are many privacy policies that apply to each of us and most are lengthy. But could we read them all if we cared enough?

According to international research, it would take the average person 244 hours per year (six working weeks) to read all privacy policies that apply to them, not including the time it would take to check websites for changes to these policies. This would be an impossible task for most working adults.

Under our current law, if you don’t have time to read the thousands of words in the policy, your consent can be implied by your continued use of the website which provides a link to that policy.

We can’t understand them

According to the CPRC, one of the reasons users typically do not read policies is that they are difficult to comprehend.

Very often these policies lead with feel-good assurances like “We care about your privacy”, and leave more concerning matters to be discovered later in vague, open-ended terms, such as:

…we may collect your personal information for research, marketing, for efficiency purposes…

In fact, the CPRC Report states around one in five Australians:

…wrongly believed that if a company had a Privacy Policy, it meant they would not share information with other websites or companies.




Read more:
Consent and ethics in Facebook’s emotional manipulation study


We can’t negotiate for better terms

We generally have no ability to negotiate about how much of our data the company will collect, and how it will use and disclose it.

According to the CPRC Report, most Australians want companies only to collect data that is essential for the delivery of their service (91%) and want options to opt out of data collection (95%).

However, our law allows companies to group into one consent various types and uses of our data. Some are essential to providing the service, such as your name and address for delivery, and some are not, such as disclosing your details to “business partners” for marketing research.

These terms are often presented in standard form, on a take-it-or-leave-it basis. You either consent to everything or refrain from using the service.

https://datawrapper.dwcdn.net/L7fPF/2/

We can’t avoid the service altogether

According to the CPRC, over two thirds of Australians say they have agreed to privacy terms with which they are not comfortable, most often because it is the only way to access the product or service in question.

In a 2017 report, the Productivity Commission expressed the view that:

… even in sectors where there are dominant firms, such as social media, consumers can choose whether or not to use the class of product or service at all, without adversely affecting their quality of life.

However, in many cases, we cannot simply walk away if we don’t like the privacy terms.

Schools, for example, may decide what apps parents must use to communicate about their children. Many jobs require people to have Facebook or other social media accounts. Lack of transparency and competition in privacy terms also means there is often little to choose between rival providers.

We need higher standards for consent

There is frequently no real notice and no real choice in how our personal data is used by companies.

The EU General Data Protection Regulation (GDPR), which comes into effect on 25 May 2018, provides one model for improved consent. Under the GDPR, consent:

… should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement.




Read more:
You may be sick of worrying about online privacy, but ‘surveillance apathy’ is also a problem


The Privacy Act should be amended along these lines to set higher standards for consent, including that consent should be:

  • explicit and require action on the part of the customer – consent should not be implied by the mere use of a website or service and there should be no pre-ticked boxes. Privacy should be the default;

  • unbundled – individuals should be able to choose to consent only to the collection and use of data essential to the delivery of the service, with separate choices of whether to consent to additional collections and uses;

  • revocable – the individual should have the option to withdraw their consent in respect of future uses of their personal data at any time.
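As a sketch of what these three standards could mean in software, the hypothetical consent record below starts with nothing consented (privacy by default), requires a deliberate action to grant each purpose separately (explicit and unbundled), and can be withdrawn at any time (revocable). The names and structure are ours, not anything prescribed by the Privacy Act or the GDPR.

```python
# Hypothetical sketch only: what an explicit, unbundled, revocable consent
# record could look like as a data structure. Nothing here reflects the actual
# wording of the Privacy Act or the GDPR.

from datetime import datetime, timezone

class ConsentRecord:
    def __init__(self):
        # Privacy by default: every purpose starts un-consented
        # (the equivalent of "no pre-ticked boxes").
        self.purposes = {"essential_delivery": None,
                         "marketing": None,
                         "research": None}

    def grant(self, purpose):
        # Explicit: consent exists only after a deliberate act, and we record when.
        self.purposes[purpose] = datetime.now(timezone.utc)

    def revoke(self, purpose):
        # Revocable: withdrawal applies to future uses, at any time.
        self.purposes[purpose] = None

    def allows(self, purpose):
        return self.purposes.get(purpose) is not None

record = ConsentRecord()
record.grant("essential_delivery")   # unbundled: marketing stays opted out
print(record.allows("marketing"))    # → False
```

The key design point is that each purpose is a separate switch, so agreeing to delivery does not silently agree to marketing or research.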

While further improvements are needed, upgrading our standards for consent would be an important first step.

Katharine Kemp, Lecturer, Faculty of Law, UNSW, and Co-Leader, ‘Data as a Source of Market Power’ Research Stream of The Allens Hub for Technology, Law and Innovation, UNSW

This article was originally published on The Conversation. Read the original article.

Tough new EU privacy regulations could lead to better protections in Australia



The EU’s General Data Protection Regulation comes into force on May 25.
Shutterstock

Vincent Mitchell, University of Sydney

Major personal data breaches, such as those that occurred recently at the Commonwealth Bank, Cambridge Analytica and Yahoo, have taught us how vulnerable our privacy is.

Like the cigarette and alcohol markets, it took a long time to prove that poorly regulated data collection can do us harm. And as with passive smoking, we now know that data trading can harm those around us as well as ourselves.

Regulators in the European Union are cracking down on the problem with the introduction of the strict new General Data Protection Regulation (GDPR) from May 25. The hope is that the new rules will shift the balance of power in the market for data away from companies and back to the owners of that data.




Read more:
Online privacy must improve after the Facebook data uproar


The GDPR applies to companies that trade in the EU or process the data of people in the EU. This includes some of Australia’s biggest companies, such as the Commonwealth Bank and Bunnings Warehouse. Since companies that don’t operate in the EU or process the data of people in the EU aren’t required to comply, Australian consumers could soon face a two-tier system of privacy protections.

That isn’t all bad news. By choosing to deal with companies with better data protection policies, Australian consumers can create pressure for change in how personal data is handled across the board.

How the GDPR empowers consumers

The GDPR makes it clearer what companies should be doing to protect personal data and empowers consumers like never before.

When dealing with companies operating in the EU, you will now have the right to:

  1. access your own data and any derived or inferred data

  2. rectify errors and challenge decisions based on your data, including by objecting to direct marketing

  3. be forgotten and erased in most situations

  4. move your data more easily, such as when changing insurance companies or banks

  5. object to certain types of data processing and challenge significant decisions based purely on profiling, such as for medical insurance or loans

  6. receive compensation.

This final right will lead to another profound improvement in regulation of the market for personal data.

Consumers as a regulating force

As a result of these new rights and powers, consumers themselves can help regulate company behaviour by monitoring how well they comply with GDPR.

In addition to complaining to authorities, such as the Information Commissioner, when consumers encounter breaches they can complain directly to the company, share stories online and alert fellow users.

This can be powerful – especially when whistleblowers actually work in the industry, as was the case with Cambridge Analytica’s Christopher Wylie.




Read more:
GDPR: ten easy steps all organisations should follow


Companies that don’t protect people’s personal data will face fines from the regulator of up to 4% of global turnover or €20 million, whichever is greater. In addition, they could be required to pay compensation directly to consumers who have asked investigating authorities to claim on their behalf.

This potentially means that all those millions of EU citizens who were caught up in the Facebook Cambridge Analytica scandal could, in the future, be able to sue Facebook.

From the viewpoint of empowering and motivating consumers to monitor what companies do with their data, this is a momentous change.

A shift in our expectations of data privacy

The way things currently stand, there is an imbalance in the personal data market. Companies take all the profit from our personal data, yet we pay the price as individuals, or as a society, for privacy breaches.

But as a result of GDPR, we are likely to see expectations of how companies should act begin to shift. This will create pressure for change.

You’ve probably already been sent notifications from companies asking you to re-consent to their privacy policies. This is because GDPR expects consent to be more explicit and active – default settings and pre-checked boxes are considered inadequate.

Consumers should also expect companies to make it just as easy to withdraw consent as it is to give it.




Read more:
Why your app is updating its privacy settings and how this will affect businesses


Unlike New Zealand’s, personal data protections in Australia – and in the massive data markets of the BRIC countries – are not considered “adequate”, and fall below EU standards.

Consumers should be wary of vested interest arguments, such as Facebook’s claim that it just wants to connect people. To use an analogy, that’s comparable to an alcohol manufacturer saying it just wants people to have a good time, without highlighting the potential risks of alcohol use.

If you want these greater rights and protections, now is the perfect time to lobby your Members of Parliament and demand the best available protection from all the companies you deal with.

Vincent Mitchell, Professor of Marketing, University of Sydney

This article was originally published on The Conversation. Read the original article.

The ethics of ‘securitising’ Australian cyberspace


Dr Shannon Brandt Ford, Curtin University

This article is the fifth in a five-part series exploring Australian national security in the digital age. Read parts one, two, three and four here.


As technology evolves and Australia becomes ever-more reliant on cyber systems throughout government and society, the threats that cyber attacks pose to the country’s national security are real – and significant.

Cyber weapons now exist that can be used to attack and exploit vulnerabilities in Australia’s national infrastructure. Many of the cyber threats that exist now, such as defacing a website, are not that serious.

But more nefarious attacks on software systems have the potential to damage critical infrastructure and threaten people’s lives.




Read more:
Since Boston bombing, terrorists are using new social media to inspire potential attackers


The Australian Cyber Security Centre (ACSC) Threat Report addresses these concerns every year, highlighting the ubiquitous nature of cyber-crime in Australia, the potential for cyber-terrorism, and the vulnerability of data stored on government and commercial networks.

Governments now take these types of threats so seriously, they speak of the potential for military responses to cyber-attacks in the future. As one US military official told The Wall Street Journal:

If you shut down our power grid, maybe we will put a missile down one of your smokestacks.

A securitised internet

Such concerns have been a key part of Australia’s ambitions to revamp its national security to respond to future cyber-threats. Australia’s Cyber Security Strategy, for instance, states that:

all of us – governments, businesses and individuals – need to work together to build resilience to cybersecurity threats and to make the most of opportunities online.

An important ethical concern with such a focus, however, is the risk that Australia’s cyberspace becomes “securitised”.

When we securitise an issue, we frame the activity as being conducted in a state of emergency – a period in which a government temporarily changes the conditions of its political and social institutions in response to a particularly serious event, such as a natural disaster, war or rioting. Importantly, due process constraints on government officials, such as habeas corpus, can be suspended.

An ethical problem with a securitised or militarised cyberspace, especially if it becomes a permanent measure, is that it can quickly erode fundamental human rights such as privacy and freedom of speech.

Ethical problems in a brave new world

For instance, what are the ethical implications of conducting military activities against terrorist propaganda online, by conducting psychological operations on social media platforms, say, or simply shutting them down?

Using social media in this way would be counter to the social and civil function of these channels of communication. Trying to deny audiences the ability to speak freely on social media could also undermine the internet’s effectiveness as a tool for social and economic good. This is especially problematic in Australia, where rights such as privacy and freedom of speech are taken for granted as fundamental civic values.

There is also potential for a militarised cyberspace to increase the likelihood of conflict between states. As cyber-attacks are a relatively new threat, it’s unclear what actions might lead to escalation and constitute an act of war.

The perception that cyber-attacks are not as harmful as, say, a missile attack could lead to their increased use. This opens the door to potentially more serious forms of conflict.




Read more:
The Cyber Security Strategy is only a small step in the right direction


Another important ethical consideration is the enhanced government surveillance of a securitised internet. The fall-out from the Edward Snowden disclosures, for instance, revealed the intrusiveness of US security agencies’ activities online. This in turn had the effect of undermining the public’s trust in the government.

Such a loss of trust in one segment of the government can have potentially dire impacts on other areas. For example, in response to public suspicions of the actions of security agencies, governments might overreact and cut worthwhile surveillance programmes. Or disgruntled government employees (like Snowden) might leak other types of confidential or sensitive information to the detriment of the public good.

A recent example occurred when highly sensitive correspondence between Home Affairs Secretary Mike Pezzullo and Defence Secretary Greg Moriarty was leaked to the media. The communications detailed plans to give the Australian Signals Directorate new domestic surveillance powers. Mark Dreyfus, the shadow minister for national security, labelled the leak “a deeply worrying signal of internal struggles”.

So it is important that Australian government agencies tasked with managing national security in cyberspace consistently act in a trustworthy manner. As such, there should be guarantees that decisions related to cyber-security oversight and governance are not driven by short-term political gains.

In particular, government decision-makers should seek to promote an informed and public debate about the standards required for “minimum transparency, accountability and oversight of government surveillance practices.”

Anything short of that could make the country’s cyber-infrastructure less secure – a frightening prospect in an increasingly hostile and volatile digital world.

Dr Shannon Brandt Ford, Lecturer, Curtin University

This article was originally published on The Conversation. Read the original article.

How information warfare in cyberspace threatens our freedom



Information warfare in cyberspace could replace reason and reality with rage and fantasy.
Shutterstock

Roger Bradbury, Australian National University; Anne-Marie Grisogono, Crawford School of Public Policy, Australian National University; Dmitry Brizhinev, Australian National University; John Finnigan, CSIRO, and Nicholas Lyall, Australian National University

This article is the fourth in a five-part series exploring Australian national security in the digital age. Read parts one, two and three here.


Just as we’ve become used to the idea of cyber warfare, along come the attacks, via social media, on our polity.

We’ve watched in growing amazement at the brazen efforts by the Russian state to influence the US elections, the UK’s Brexit referendum and other democratic targets. And we’ve tended to conflate them with the seemingly-endless cyber hacks and attacks on our businesses, governments, infrastructure, and a long-suffering citizenry.

But these social media attacks are a different beast altogether – more sinister, more consequential and far more difficult to counter. They are the modern realisation of the Marxist-Leninist idea that information is a weapon in the struggle against Western democracies, and that the war is ongoing. There is no peacetime or wartime, there are no non-combatants. Indeed, the citizenry are the main targets.

A new battlespace for an old war

These subversive attacks on us are not a prelude to war, they are the war itself; what Cold War strategist George Kennan called “political warfare”.

Perversely, as US cyber experts Herb Lin and Jaclyn Kerr note, modern communication attacks exploit the technical virtues of the internet such as “high connectivity” and “democratised access to publishing capabilities”. What the attackers do is, broadly speaking, not illegal.

The battlespace for this warfare is not the physical, but the cognitive environment – within our brains. It seeks to sow confusion and discord, to reduce our abilities to think and reason rationally.

Social media platforms are the perfect theatres in which to wage political warfare. Their vast reach, high tempo, anonymity, directness and cheap production costs mean that political messages can be distributed quickly, cheaply and anonymously. They can also be tailored to target audiences and amplified quickly to drown out adversary messages.

Simulating dissimulation

We built simulation models (for a forthcoming publication) to test these ideas. We were astonished at how effectively this new cyber warfare can wreak havoc in the models, co-opting filter bubbles and preventing the emergence of democratic discourse.

We used agent-based models to examine how opinions shift in response to the insertion of strong opinions (fake news or propaganda) into the discourse.

Our agents in these simple models were individuals who each had a set of opinions. We represented different opinions as axes in an opinion space. Individuals are located in the space by the values of their opinions. Individuals close to each other in the opinion space are close to each other in their opinions. Their differences in opinion are simply the distance between them.

When an individual links to a neighbour, they experience a degree of convergence – their opinions are drawn towards each other. An individual’s position is not fixed, but may shift under the influence of the opinions of others.

The dynamics in these models were driven by two conflicting processes:

  • Individuals are social – they have a need to communicate – and they will seek to communicate with others with whom they agree. That is, other individuals nearby in their opinion space.

  • Individuals have a limited number of communication links they can manage at any time (also known as their Dunbar number), and they continue to find links until they satisfy this number. Individuals, therefore, are sometimes forced to communicate with individuals with whom they disagree in order to satisfy their Dunbar number. But if they wish to create a new link and have already reached their Dunbar number, they will prune another link.

Figure 1: The emergence of filter bubbles

Figure 1: Filter bubbles emerging with two dimensions, opinions of issue X and opinions of issue Y.
roger.bradbury@anu.edu.au

To begin, 100 individuals, represented as dots, were randomly distributed across the space with no links. At each step, every individual attempts to link with a near neighbour up to its Dunbar number, perhaps breaking earlier links to do so. In doing so, it may change its position in opinion space.

Over time, individuals draw together into like-minded groups (filter bubbles). But the bubbles are dynamic. They form and dissolve as individuals continue to prune old links and seek newer, closer ones as a result of their shifting positions in the opinion space. Figure 1, above, shows the state of the bubbles in one experiment after 25 steps.

Figure 2: Capturing filter bubbles with fake news

Figure 2: Fake news (the green lines) capturing the filter bubbles.
roger.bradbury@anu.edu.au

At time step 26, we introduced two pieces of fake news into the model. These were represented as special sorts of individuals that had an opinion in only one dimension of the opinion space and no opinion at all in the other. Further, these “individuals” didn’t seek to connect to other individuals and they never shifted their opinion as a result of ordinary individuals linking to them. They are represented by the two green lines in Figure 2.

Over time (the figure shows time step 100), each piece of fake news breaks down the old filter bubbles and reels individuals towards their green line. They create new tighter filter bubbles that are very stable over time.
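The model just described can be sketched in a short script. The following is a minimal illustrative reconstruction, not the authors’ forthcoming code: the Dunbar number, the convergence rates and the position of the fake-news line (x = 0.9) are all assumed values, and for brevity a single fake-news source is injected rather than two.

```python
# Toy version of the agent-based opinion model: individuals are points
# in a 2-D opinion space, link to near neighbours up to a Dunbar number,
# and converge towards the opinions of those they link to.
import math
import random

random.seed(1)

N = 100           # individuals
STEPS_FREE = 25   # steps before fake news is injected
STEPS_FAKE = 75   # steps with fake news present (time step 100 total)
DUNBAR = 5        # max links per individual (assumed value)
PULL = 0.05       # convergence towards a linked neighbour (assumed)
FAKE_PULL = 0.10  # attraction towards the fake-news line (assumed)
FAKE_X = 0.9      # fake news has an opinion on one axis only (assumed)

agents = [[random.random(), random.random()] for _ in range(N)]
links = {i: set() for i in range(N)}

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def step():
    for i in range(N):
        # Seek the nearest neighbours in opinion space, pruning the
        # most distant existing link when the Dunbar number is reached.
        others = sorted((j for j in range(N) if j != i),
                        key=lambda j: dist(agents[i], agents[j]))
        for j in others[:DUNBAR]:
            if j in links[i]:
                continue
            if len(links[i]) >= DUNBAR:
                worst = max(links[i],
                            key=lambda k: dist(agents[i], agents[k]))
                links[i].discard(worst)
                links[worst].discard(i)
            links[i].add(j)
            links[j].add(i)
        # Linked opinions are drawn towards each other.
        for j in links[i]:
            for d in (0, 1):
                agents[i][d] += PULL * (agents[j][d] - agents[i][d])

# Phase 1: dynamic filter bubbles form and dissolve (Figure 1).
for _ in range(STEPS_FREE):
    step()

# Phase 2: the fake-news "individual" never links and never moves;
# ordinary individuals are simply reeled in towards its line (Figure 2).
for _ in range(STEPS_FAKE):
    step()
    for a in agents:
        a[0] += FAKE_PULL * (FAKE_X - a[0])

# The spread in the fake-news dimension collapses towards zero.
spread_x = max(a[0] for a in agents) - min(a[0] for a in agents)
print(round(spread_x, 3))
```

Consistent with Figure 2, the population ends up clustered tightly around the fake-news line: opinions in that dimension collapse, and stable bubbles persist only along the other axis.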

Information warfare is a threat to our Enlightenment foundations

These are the conventional tools of demagogues throughout history, but this agitprop is now packaged in ways perfectly suited to the new environment. Projected against the West, this material seeks to increase political polarisation in our public sphere.

Rather than actually change an election outcome, it seeks to prevent the creation of any coherent worldview. It encourages the creation of filter bubbles in society where emotion is privileged over reason and targets are immunised against real information and rational consideration.

These models confirm Lin and Kerr’s hypothesis. “Traditional” cyber warfare is not an existential threat to Western civilisation. We can and have rebuilt our societies after kinetic attacks. But information warfare in cyberspace is such a threat.

The Enlightenment gave us reason and reality as the foundations of political discourse, but information warfare in cyberspace could replace reason and reality with rage and fantasy. We don’t know how to deal with this yet.

Roger Bradbury, Professor, National Security College, Australian National University; Anne-Marie Grisogono, Visiting fellow, Crawford School of Public Policy, Australian National University; Dmitry Brizhinev, Research Assistant, National Security College, Australian National University; John Finnigan, Leader, Complex Systems Science, CSIRO, and Nicholas Lyall, Research Assistant (National Security College), Australian National University

This article was originally published on The Conversation. Read the original article.

Tech giants are battling it out to supply the global internet – here’s why that’s a problem


Claudio Bozzi, Deakin University

The US Federal Communications Commission last month granted Elon Musk’s SpaceX permission to launch 4,425 satellites that will provide affordable high speed broadband internet to consumers.

The Starlink network will be accessible in the US and around the world – including in areas where the internet is currently unavailable or unreliable.

SpaceX isn’t the only company investing in global internet infrastructure. Facebook, Google and Microsoft all have various projects underway to deliver high speed connectivity to remote and rural areas.

It’s all part of a trend of private companies attempting to breach the digital divide and wage a battle for the global internet.




Read more:
Connecting everyone to the internet won’t solve the world’s development problems


But entrusting market forces to build critical internet resources and infrastructure is problematic. These companies aren’t obligated to operate in the interest of consumers. In some cases their practices could serve to further entrench the existing digital divide.

Half the world’s population can’t access the internet

The internet is embedded in social, personal and economic life across the developed world.

But access varies significantly between industrialised nations that boast high per capita incomes, and developing nations with largely poor, rural populations.

For example, 94% of South Korean adults and 93% of Australian adults have access to the internet, compared with just 22% of Indians and 15% of Pakistanis.


As society becomes increasingly dependent on the internet, nations and communities need equal access. Otherwise legacy inequalities will become further entrenched and new divides will emerge, potentially creating a “permanent underclass”.

Tech giants battle it out

The tech giants have been investing heavily in critical infrastructure in recent years.

Google owns the FASTER trans-Pacific undersea cable link, which has carried data (at 60 terabits per second) between the US, Japan and Taiwan since 2016. Meanwhile, the Microsoft and Facebook funded MAREA trans-Atlantic cable has connected the US to southern Europe (at 160 terabits per second) since 2017.

New investments centre on atmospheric, stratospheric and satellite delivery strategies.

Along with SpaceX’s constellation of small satellites, Facebook’s internet.org uses atmospheric drones to deliver internet to rural and remote areas. Google’s Project Loon uses high altitude navigable balloons for the same purpose.


The privatisation of a public good is problematic

Private investors who build infrastructure are driven by commercial imperatives rather than a need to deliver social benefits. And that dynamic can entrench and exacerbate existing – and create new – digital, social and economic divides.

This can be innocuous enough, such as when the company that makes League of Legends built its own internet network to ensure its players weren’t upset by slow speeds.

But it’s more of a problem when faster connections can tilt investment and trading playing fields in favour of those with access, leaving ordinary investors out in the cold.




Read more:
How the internet is failing to drive economic development where promised


Facebook’s Free Basics is a program that aims to provide cheap internet services to consumers in developing countries. It currently operates in 63 developing nations.

Critics say the service is a blatant strategy to extend Facebook’s global dominance to the developing world. It’s also been accused of violating net neutrality by strictly controlling participating sites to eliminate Facebook’s competitors.

Technology is not neutral

Privately owned and operated internet infrastructure can also become a means of social control.

Termination of internet services is a notorious tactic used by authoritarian regimes to repress dissent by disrupting communication and censoring information. But private entities may also exercise control over infrastructure outside of government regulation.

For example, when WikiLeaks published government correspondence in 2010, Amazon and EveryDNS withdrew the services that maintained the WikiLeaks website. Mastercard, PayPal and VISA terminated services through which the organisation received funding for its activities.

These companies were not acting under government direction, citing violations of their acceptable use policies to justify their decisions. Harvard professor Yochai Benkler said at the time:

Commercial owners of the critical infrastructures of the networked environment can deny service to controversial speakers, and some appear to be willing to do so at a mere whiff of public controversy.

SpaceX must meet a host of technical conditions before Starlink can be activated. But we shouldn’t assume that providing internet access to developing countries will lead to an ecosystem from which economic or social benefits will flow.

When the logic of corporate capitalism dominates the provision of internet services, there’s no guarantee that the internet’s founding principles – an egalitarian tool where users share information for the greater good – will be upheld.

Claudio Bozzi, Lecturer in Law, Deakin University

This article was originally published on The Conversation. Read the original article.

The public has a vital role to play in preventing future cyber attacks



Numerous cyber attacks in recent years have targeted common household devices, such as routers.
Shutterstock

Sandeep Gopalan, Deakin University

Up to 400 Australian organisations may have been snared in a massive hacking incident detailed today. The attack, allegedly engineered by the Russian government, targeted millions of government and private sector machines globally via devices such as routers, switches, and firewalls.

This follows a cyber attack orchestrated by Iranian hackers revealed last month, which targeted Australian universities.




Read more:
Explainer: how internet routers work and why you should keep them secure


A joint warning by the US and UK governments stated that the purpose of the most recent attack was to:

… support espionage, extract intellectual property, maintain persistent access to victim networks, and potentially lay a foundation for future offensive operations.

The Russians’ modus operandi was to target end-of-life devices and those without encryption or authentication, thereby compromising routers and network infrastructure. In doing so, they secured legitimate credentials from individuals and organisations with weak password protections in order to take control of the infrastructure.

Cyber attacks are key to modern conflict

This is not the first instance of Russian aggression.

The US city of Atlanta last month was crippled by a cyber attack and many of its systems are yet to recover – including the court system. In that case, attackers used the SamSam ransomware, which also uses network infrastructure to infiltrate IT systems, and demanded a ransom payment in Bitcoin.

Baltimore was hit by a cyber attack on March 28 that disrupted its emergency 911 calling system. Russian hackers are suspected to have taken down the French TV station TV5Monde in 2015. The US Department of State was hacked in 2015 – and Ukraine’s power grid and military infrastructure were also compromised in separate attacks in 2015 and 2017.

But Russia is not alone in committing these attacks.

In December 2017, North Korean hackers were blamed for the WannaCry attack that infected over 300,000 computers in 150 countries, affecting hospitals and banks. The UK’s National Health Service was particularly bruised and patients had to be turned away from surgical procedures and appointments.

Iran has conducted cyber attacks against numerous targets in the US, Israel, UAE, and other countries. In turn, Iran was subjected to a cyber attack on April 7 that saw computer screens display the US flag with the warning “don’t mess with our elections”.

Prosecuting hackers is ineffective

The US government has launched prosecutions against hackers – most recently against nine Iranians for the cyber attacks on universities. However, prosecutions are of limited efficacy when hackers are beyond the reach of US law enforcement and unlikely to be surrendered by their home countries.

As I have written previously, countries such as Australia and the US cannot watch passively as rogue states conduct cyber attacks against targets within our jurisdiction.




Read more:
Is counter-attack justified against a state-sponsored cyber attack? It’s a legal grey area


Strong countermeasures must be taken in self defence against the perpetrators wherever they are located. If necessary, self defence must be preemptive – any potential perpetrators must be crippled before they are able to launch strikes on organisations here.

Reactive measures are a weak deterrent, and our response should include a first strike cyber attack option where there is credible intelligence about imminent attacks. Notably, the UK has threatened to use conventional military strikes against cyber attacks. This may be an overreaction at this time.

Educating the public is essential

Numerous cyber attacks in recent years – including the current attack – have targeted common household devices, such as routers. As a result, the security of public infrastructure relies to some extent on the security practices of everyday Australians.

So, what role should the government play in ensuring Australians are securing their devices?

Unfortunately, cybersecurity isn’t as simple as administering an annual flu shot. It’s not feasible for the government to issue cybersecurity software to residents since security patches are likely to be out-of-date before the next attack.

But the government should play a role in educating the public about cyber attacks and securing public internet services.

The city of New York has provided a free app to all residents called NYC Secure that is aimed at educating people. It is also adding another layer of security to its free wifi services to protect users from downloading malicious software or accessing phishing websites. And the city of Jonesboro, Georgia is putting up a firewall to secure its services.




Read more:
Artificial intelligence cyber attacks are coming – but what does that mean?


Australian city administrations must adopt similar strategies alongside a sustained public education effort. A vigilant public is a necessary component in our collective security strategy against cyber attacks.

This cannot be achieved without significant investment. In addition to education campaigns, private organisations – banks, universities, online sellers, large employers – must be leveraged into ensuring their constituents do not enable attacks through end-of-life devices, unsupported software, poor password protection policies and lack of encryption.

Governments must also prioritise investment in their own IT and human resources infrastructure. Public sector IT talent has always lagged the private sector due to pay imbalances, and other structural reasons.

It is difficult for governments to attain parity of technical capabilities with Russian or North Korean hackers in the short term. The only solution is a strong partnership – in research, detection tools, and counter-response strategies – with the private sector.

The Atlanta attack illustrates the perils of inaction – an audit report shows the city was warned months in advance but did nothing. Australian cities must not make the same mistake.

Sandeep Gopalan, Pro Vice-Chancellor (Academic Innovation) & Professor of Law, Deakin University

This article was originally published on The Conversation. Read the original article.

Telecommunications Ombudsman reports surge in complaints about services delivered over NBN


NBN Co chief executive Bill Morrow will present an upbeat account of the network’s impact in a speech on Tuesday.
AAP Image/Supplied by NBN Co

Michelle Grattan, University of Canberra

The government has strongly challenged the Telecommunications Industry Ombudsman (TIO) after its report showed complaints about services delivered over the NBN surged by 204% in the second half of 2017, compared with the same period a year earlier.

Communications Minister Mitch Fifield also announced details of a review, earlier flagged, of the telecommunications consumer protections framework, saying the high level of complaints about telecommunications services generally showed “the existing model for complaints handling and redress is not working”.

Fifield said the way the information regarding the 22,827 complaints about services delivered over the NBN was presented in the TIO report, released Tuesday, “could give the impression that responsibility for this figure rests with NBN Co”.

But advice to the government from NBN Co was that of these complaints, less than 5% were sent to NBN Co as complaints to resolve.

The NBN has been heavily criticised for a slow rollout – although it says it has met every target for the past 14 quarters – as well as for low speeds and connection problems, generating high levels of complaints.

The six months to December saw a 39% increase in NBN premises activated.

The government and NBN Co are also focusing on the 16% fall in the rate of complaints about these services from the first to the second half of 2017.

From January to June 2017, there were 19,683 complaints about services delivered over the NBN, making the picture better for the NBN when comparisons are made between the first and second halves of the year.

But the TIO report warns generally about comparisons of the two halves of the same year because of seasonal variations, preferring to compare the same period of each year. The government rejects the seasonal variation argument, saying the TIO itself has previously made comparisons within a year. It also believes the TIO is letting retailers off the hook.

The TIO is an industry-funded complaints resolution body. The NBN is not represented on its board.

The TIO report includes complaints for the six months to December covering mobile and fixed line telephony and both pre-NBN and NBN broadband.

It received nearly 85,000 complaints in total, which was a 28.7% rise over the same period in 2016. There was a 30.7% increase in complaints from residential consumers, and a 15.6% rise in those from small businesses.

Total complaints decreased from the 92,000 in the first half of 2017.

Fifield said that no matter who was the responsible party, the complaints figures were too high. “The current model for protecting consumers needs reform”.

The review, to provide for the post-2020 environment, will be undertaken in three parts to ensure consumers

… have access to an effective complaints handling and redress scheme;

… have reliable telecommunications services including reasonable timeframes for connections, fault repairs and appointments, as well as potential compensation or penalties against providers;

… are able to make informed choices and are treated fairly by their providers in service, contracts, billing, credit and debt management and switching providers.

Meanwhile chief executive of NBN Co Bill Morrow will present an upbeat account of the network’s impact in a speech at the National Press Club on Tuesday.

He will say the network generated an extra $1.2 billion in economic activity in 2017 and is encouraging more women to become their own bosses.

Morrow, who is leaving his job at the end of the year, will present figures prepared by the economic advisory firm AlphaBeta, using census data, modelling and polling to estimate the impact of the network – labelled “the nbn effect”.

He will say that “nbn-connected women are becoming self-employed at twice the overall rate of self-employment growth in nbn areas.

“In percentage terms, these results are stunning. The number of self-employed women in nbn regions grew at an average 2.3% every year, compared to just 0.1% annual average growth in female entrepreneurs in non-nbn areas.

“If this trend continues, up to 52,200 additional Australian women will be self-employed by the end of the rollout due to the ‘nbn effect’”, he will say.

The 2017 overall $1.2 billion estimated increase in economic activity – through new jobs, businesses and greater productivity – excludes the economic stimulus of the rollout itself.

“By the end of the rollout, this ‘nbn effect’ is predicted to have multiplied to $10.4 billion a year,” Morrow will say. “This represents an extra 0.07 percentage points to GDP growth, or 2.7% of the estimated GDP growth rate in 2021. By the end of the rollout, the ‘nbn effect’ is forecast to have helped create 31,000 additional jobs,” Morrow will say.

The network is now more than halfway built. About one in three homes and businesses are connected. The rollout is due to be completed by the end of 2020. Morrow has been CEO since 2014.

Michelle Grattan, Professorial Fellow, University of Canberra

This article was originally published on The Conversation. Read the original article.

Shadow profiles – Facebook knows about you, even if you’re not on Facebook


Andrew Quodling, Queensland University of Technology

Facebook’s founder and chief executive Mark Zuckerberg faced two days of grilling before US politicians this week, following concerns over how his company deals with people’s data.

But the data Facebook has on people who are not signed up to the social media giant also came under scrutiny.

During Zuckerberg’s congressional testimony he claimed to be ignorant of what are known as “shadow profiles”.

Zuckerberg: I’m not — I’m not familiar with that.

That’s alarming, given that we have been discussing this element of Facebook’s non-user data collection for the past five years, ever since the practice was brought to light by researchers at Packet Storm Security.

Maybe it was just the phrase “shadow profiles” with which Zuckerberg was unfamiliar. It wasn’t clear, but others were not impressed by his answer.


Facebook’s proactive data-collection processes have been under scrutiny in previous years, especially as researchers and journalists have delved into the workings of Facebook’s “Download Your Information” and “People You May Know” tools to report on shadow profiles.

Shadow profiles

To explain shadow profiles, let’s imagine a small social group of three people – Ashley, Blair and Carmen – who already know one another, and have each other’s email addresses and phone numbers in their phones.

If Ashley joins Facebook and uploads her phone contacts to Facebook’s servers, then Facebook can proactively suggest friends whom she might know, based on the information she uploaded.

For now, let’s imagine that Ashley is the first of her friends to join Facebook. The information she uploaded is used to create shadow profiles for both Blair and Carmen — so that if Blair or Carmen joins, they will be recommended Ashley as a friend.

Next, Blair joins Facebook, uploading his phone’s contacts too. Thanks to the shadow profile, he has a ready-made connection to Ashley in Facebook’s “People You May Know” feature.

At the same time, Facebook has learned more about Carmen’s social circle — in spite of the fact that Carmen has never used Facebook, and therefore has never agreed to its policies for data collection.
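The Ashley–Blair–Carmen scenario above can be expressed as a toy sketch. This is an illustrative reconstruction of the idea only, not Facebook’s actual implementation; the function names and data structures are hypothetical.

```python
# Sketch of how contact uploads can build "shadow" data about people
# who never joined a platform. Purely illustrative.
from collections import defaultdict

# For each contact identifier, the set of members who uploaded it.
# For a non-member, this set *is* their shadow profile's social graph.
known_by = defaultdict(set)
members = set()

def join_and_upload(person, contacts):
    """A person joins the platform and uploads their address book."""
    members.add(person)
    for contact in contacts:
        known_by[contact].add(person)

def suggest_friends(person):
    """'People You May Know': members who uploaded this person's details."""
    return known_by[person] - {person}

# Ashley joins first; shadow data now exists for Blair and Carmen.
join_and_upload("ashley", ["blair", "carmen"])

# Blair joins and is immediately suggested Ashley as a friend.
join_and_upload("blair", ["ashley", "carmen"])
print(suggest_friends("blair"))    # prints {'ashley'}

# Carmen has never joined and never consented, yet her social
# circle is already known to the platform.
print(sorted(known_by["carmen"]))  # prints ['ashley', 'blair']
print("carmen" in members)         # prints False
```

The design point is that the social graph accumulates on the identifier, not the account: consent is exercised by the uploader, while the information is about the contact.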

Despite the scary-sounding name, I don’t think there is necessarily any malice or ill will in Facebook’s creation and use of shadow profiles.

It seems like an earnestly designed feature in service of Facebook’s goal of connecting people. It’s a goal that clearly also aligns with Facebook’s financial incentives for growth and garnering advertising attention.

But the practice brings to light some thorny issues around consent, data collection, and personally identifiable information.

What data?

Some of the questions Zuckerberg faced this week highlighted issues relating to the data that Facebook collects from users, and the consent and permissions that users give (or are unaware they give).

Facebook is often quite deliberate in its characterisations of “your data”, rejecting the notion that it “owns” user data.

That said, there are a lot of data on Facebook, and what exactly is “yours” or just simply “data related to you” isn’t always clear. “Your data” notionally includes your posts, photos, videos, comments, content, and so on. It’s anything that could be considered as copyright-able work or intellectual property (IP).

What’s less clear is the state of your rights relating to data that is “about you”, rather than supplied by you. This is data that is created by your presence or your social proximity to Facebook.

Examples of data “about you” might include your browsing history and data gleaned from cookies, tracking pixels, and the like button widget, as well as social graph data supplied whenever Facebook users supply the platform with access to their phone or email contact lists.

Like most internet platforms, Facebook rejects any claim to ownership of the IP that users post. To avoid falling foul of copyright issues in the provision of its services, Facebook demands (as part of its user agreements and Statement of Rights and Responsibilities) a:

…non-exclusive, transferable, sub-licensable, royalty-free, worldwide license to use any IP content that you post on or in connection with Facebook (IP License). This IP License ends when you delete your IP content or your account unless your content has been shared with others, and they have not deleted it.

Data scares

If you’re on Facebook then you’ve probably seen a post that keeps making the rounds every few years, saying:

In response to the new Facebook guidelines I hereby declare that my copyright is attached to all of my personal details…

Part of the reason we keep seeing data scares like this is that Facebook’s lacklustre messaging around user rights and data policies have contributed to confusion, uncertainty and doubt among its users.




Read more:
How to stop haemorrhaging data on Facebook


It was a point that Republican Senator John Kennedy raised with Zuckerberg this week (see video).

Senator John Kennedy’s exclamation is a strong but fair assessment of the failings of Facebook’s policy messaging.

After the grilling

Zuckerberg and Facebook should learn from this congressional grilling that they have struggled and occasionally failed in their responsibilities to users.

It’s important that Facebook now makes efforts to communicate more strongly with users about their rights and responsibilities on the platform, as well as the responsibilities that Facebook owes them.

This should go beyond a mere awareness-style PR campaign. It should seek to truly inform and educate Facebook’s users, and people who are not on Facebook, about their data, their rights, and how they can meaningfully safeguard their personal data and privacy.




Read more:
Would regulation cement Facebook’s market power? It’s unlikely


Given the magnitude of Facebook as an internet platform, and its importance to users across the world, the spectre of regulation will continue to raise its head.

Ideally, the company should look to broaden its governance horizons, by seeking to truly engage in consultation and reform with Facebook’s stakeholders – its users — as well as the civil society groups and regulatory bodies that seek to empower users in these spaces.

Andrew Quodling, PhD candidate researching governance of social media platforms, Queensland University of Technology

This article was originally published on The Conversation. Read the original article.