Here’s what a privacy policy that’s easy to understand could look like



We need a simple system for categorising data privacy settings, similar to the way Creative Commons specifies how work can be legally shared.

Alexander Krumpholz, CSIRO and Raj Gaire, CSIRO

Data privacy awareness has recently gained momentum, thanks in part to the Cambridge Analytica data breach and the introduction of the European Union’s General Data Protection Regulation (GDPR).

One of the key elements of the GDPR is that it requires companies to simplify their privacy-related terms and conditions (T&Cs) so that they are understandable to the general public. As a result, companies have been rapidly updating their T&Cs and notifying their existing users.




Read more:
Why your app is updating its privacy settings and how this will affect businesses


On one hand, these new T&Cs are now simplified legal documents. On the other hand, they are still too long. Unfortunately, most of us still skip reading those documents and simply click “accept”.

Wouldn’t it be nice if we could specify our general privacy preferences in our devices, have them check privacy policies when we sign up for apps, and warn us if the agreements overstep?

This dream is achievable.

Creative Commons as a template

For decades, software was sold or licensed with licence agreements that were several pages long, written by lawyers and hard to understand. Later, software came with standardised licences, such as the GNU General Public License, the BSD licences, or the Apache License. Those licences define users’ rights in different use cases and protect the provider from liability.

However, they were still hard to understand.

With the foundation of Creative Commons (CC) in 2001, a simplified licence was developed that reduced complex legal copyright agreements to a small set of copyright classes.

These licences are represented by small icons and short acronyms, and can be used for images, music, text and software. This helps creative users to immediately recognise how – or whether – they can use the licensed content in their own work.




Read more:
Explainer: Creative Commons


Imagine you have taken a photo and want to share it with others for non-commercial purposes only, such as to illustrate a story on a not-for-profit news website. You could license your photo as CC BY-NC when uploading it to Flickr. In Creative Commons terms, the abbreviation BY (for attribution) requires the user to credit the owner, and NC (non-commercial) restricts use to non-commercial applications.

Internet search engines will index these attributes with the files. So, if I search for photos explicitly licensed with those restrictions – via Google, for example – I will find your photo. This is possible because even computers can understand these licences.

We need to develop Privacy Commons

Just as Creative Commons licences tell others how they may use creative content, we need a Privacy Commons through which companies can tell users how they will use their data.

The Privacy Commons need to be legally binding, simple for people to understand and simple for computers to understand. Here are our suggestions for what a Privacy Commons might look like.

We propose that the Privacy Commons classifications cover at least three dimensions of private data: collection, protection, and spread.

What data is being collected?

This dimension specifies what level of personal information is collected from the user, and is therefore at risk – for example, name, email, phone number, address, date of birth, biometrics (including photos), relationships, networks, personal preferences, and political opinions. These could be categorised at different levels of sensitivity.

How is your data protected?

This dimension specifies:

  • where your data is stored – within an app, on one server, or on servers at multiple locations
  • how it is stored and transported – whether it is in plain text or encrypted
  • how long the data is kept for – days, months, years or permanently
  • how access to your data is controlled within the organisation – this indicates how well your data is protected against potentially malicious actors like hackers.

How is your data spread?

In other words, who is your data shared with? This dimension tells you whether or not the data is shared with third parties. If the data is shared, will it be de-identified appropriately? Is it shared for research purposes, or sold for commercial purposes? Are there any further controls in place after the data is shared? Will it be deleted by the third party when the user deletes it at the primary organisation?
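To make the idea concrete, here is a minimal sketch in Python of how a machine-readable Privacy Commons label could be checked against a user’s preferences before an app is installed. The category names, values and app data are entirely hypothetical – there is no such standard yet – but the point is that, as with Creative Commons, the comparison is trivial for software to perform.

```python
# Hypothetical Privacy Commons check: compare an app's declared data
# practices against a user's limits across the three proposed dimensions.

# The user's maximum acceptable settings (illustrative values only).
user_limits = {
    "collection": {"name", "email"},           # data types the user will share
    "protection": {"encrypted": True,
                   "retention_days": 365},     # minimum protection required
    "spread": {"third_parties": False},        # no sharing with third parties
}

# An app's (hypothetical) Privacy Commons declaration.
app_policy = {
    "collection": {"name", "email", "location"},
    "protection": {"encrypted": True, "retention_days": 90},
    "spread": {"third_parties": True},
}

def warnings(limits, policy):
    """Return human-readable warnings where the policy oversteps the limits."""
    issues = []
    extra = policy["collection"] - limits["collection"]
    if extra:
        issues.append(f"collects more data than allowed: {sorted(extra)}")
    if limits["protection"]["encrypted"] and not policy["protection"]["encrypted"]:
        issues.append("does not encrypt stored data")
    if policy["protection"]["retention_days"] > limits["protection"]["retention_days"]:
        issues.append("keeps data longer than allowed")
    if policy["spread"]["third_parties"] and not limits["spread"]["third_parties"]:
        issues.append("shares data with third parties")
    return issues

for issue in warnings(user_limits, app_policy):
    print("Warning:", issue)
```

In practice such labels would need to be standardised and legally binding, but the check itself is simple enough to run on a phone before you tap “accept”, which is precisely the appeal of the approach.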




Read more:
94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour


Privacy Commons will help companies think about user privacy before offering services. It will also help solve the problem of communicating about privacy, in the same way that Creative Commons is solving the problem of licensing for humans and computers. Similar ideas have been discussed in the past, such as by Mozilla. We need to revisit those ideas in the contemporary context of the GDPR.

Such a system would allow you to specify Privacy Commons settings in the configuration of your children’s devices, so that only appropriate apps can be installed. Privacy Commons could also tell you how data gathered for other purposes – loyalty rewards cards such as FlyBuys, for example – will be used.

Of course, Privacy Commons will not solve everything.

For example, it will still be a challenge to address concerns about third party personal data brokers like Acxiom or Oracle collecting, linking and selling our data without most of us even knowing.

But at least it will be a step in the right direction.

Alexander Krumpholz, Senior Experimental Scientist, CSIRO and Raj Gaire, Senior Experimental Scientist, CSIRO

This article was originally published on The Conversation. Read the original article.


94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour



It would take the average person 244 hours per year (six working weeks) to read all privacy policies that apply to them.

Katharine Kemp, UNSW

Australians are agreeing to privacy policies they are not comfortable with and would like companies only to collect data that is essential for the delivery of their service. That’s according to new, nation-wide research on consumer attitudes to privacy policies released by the Consumer Policy Research Centre (CPRC) today.

These findings are particularly important since the government’s announcement last week that it plans to implement “open banking” (which gives consumers better access to and control over their banking data) as the first stage of the proposed “consumer data right” from July 2019.




Read more:
How not to agree to clean public toilets when you accept any online terms and conditions


Consumer advocates argue that existing privacy regulation in Australia needs to be strengthened before this new regime is implemented. In many cases, they say, consumers are not truly providing their “informed consent” to current uses of their personal information.

While some blame consumers for failing to read privacy policies, I argue that not reading is often rational behaviour under the current consent model. We need improved standards for consent under our Privacy Act as a first step in improving data protection.

Australians are not reading privacy policies

Under the Privacy Act, in many cases, the collection, use or disclosure of personal information is justified by the individual’s consent. This is consistent with the “notice and choice” model for privacy regulation: we receive notice of the proposed treatment of our information and we have a choice about whether to accept.

But according to the CPRC Report, most Australians (94%) do not read all privacy policies that apply to them. While some suggest this is because we don’t care about our privacy, there are four good reasons why people who do care about their privacy don’t read all privacy policies.


We don’t have enough time

There are many privacy policies that apply to each of us and most are lengthy. But could we read them all if we cared enough?

According to international research, it would take the average person 244 hours per year (six working weeks) to read all privacy policies that apply to them, not including the time it would take to check websites for changes to these policies. This would be an impossible task for most working adults.
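The arithmetic behind that headline figure is easy to check. The inputs below are illustrative round numbers, not the actual figures from the underlying study, but they show how quickly the hours add up and that 244 hours is indeed roughly six 38-hour working weeks.

```python
# Back-of-the-envelope check of the "244 hours per year" claim.
# Inputs are hypothetical, not taken from the underlying study.

policies_per_year = 1500       # privacy policies an average person encounters
minutes_per_policy = 10        # time to read one policy

hours_per_year = policies_per_year * minutes_per_policy / 60
working_weeks = 244 / 38       # the reported figure, in 38-hour working weeks

print(f"Illustrative reading time: {hours_per_year:.0f} hours per year")
print(f"244 hours is about {working_weeks:.1f} working weeks")
```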

Under our current law, if you don’t have time to read the thousands of words in the policy, your consent can be implied by your continued use of the website which provides a link to that policy.

We can’t understand them

According to the CPRC, one of the reasons users typically do not read policies is that they are difficult to comprehend.

Very often these policies lead with feel-good assurances such as “we care about your privacy”, and leave more concerning matters to be discovered later in vague, open-ended terms, such as:

…we may collect your personal information for research, marketing, for efficiency purposes…

In fact, the CPRC Report states around one in five Australians:

…wrongly believed that if a company had a Privacy Policy, it meant they would not share information with other websites or companies.




Read more:
Consent and ethics in Facebook’s emotional manipulation study


We can’t negotiate for better terms

We generally have no ability to negotiate about how much of our data the company will collect, and how it will use and disclose it.

According to the CPRC Report, most Australians want companies only to collect data that is essential for the delivery of their service (91%) and want options to opt out of data collection (95%).

However, our law allows companies to group into one consent various types and uses of our data. Some are essential to providing the service, such as your name and address for delivery, and some are not, such as disclosing your details to “business partners” for marketing research.

These terms are often presented in standard form, on a take-it-or-leave-it basis. You either consent to everything or refrain from using the service.


We can’t avoid the service altogether

According to the CPRC, over two thirds of Australians say they have agreed to privacy terms with which they are not comfortable, most often because it is the only way to access the product or service in question.

In a 2017 report, the Productivity Commission expressed the view that:

… even in sectors where there are dominant firms, such as social media, consumers can choose whether or not to use the class of product or service at all, without adversely affecting their quality of life.

However, in many cases, we cannot simply walk away if we don’t like the privacy terms.

Schools, for example, may decide what apps parents must use to communicate about their children. Many jobs require people to have Facebook or other social media accounts. Lack of transparency and competition in privacy terms also means there is often little to choose between rival providers.

We need higher standards for consent

There is frequently no real notice and no real choice in how our personal data is used by companies.

The EU General Data Protection Regulation (GDPR), which comes into effect on 25 May 2018, provides one model for improved consent. Under the GDPR, consent:

… should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject’s agreement.




Read more:
You may be sick of worrying about online privacy, but ‘surveillance apathy’ is also a problem


The Privacy Act should be amended along these lines to set higher standards for consent, including that consent should be:

  • explicit and require action on the part of the customer – consent should not be implied by the mere use of a website or service and there should be no pre-ticked boxes. Privacy should be the default;

  • unbundled – individuals should be able to choose to consent only to the collection and use of data essential to the delivery of the service, with separate choices of whether to consent to additional collections and uses;

  • revocable – the individual should have the option to withdraw their consent in respect of future uses of their personal data at any time.

While further improvements are needed, upgrading our standards for consent would be an important first step.

Katharine Kemp, Lecturer, Faculty of Law, UNSW, and Co-Leader, ‘Data as a Source of Market Power’ Research Stream of The Allens Hub for Technology, Law and Innovation, UNSW

This article was originally published on The Conversation. Read the original article.

Tech giants are battling it out to supply the global internet – here’s why that’s a problem


Claudio Bozzi, Deakin University

The US Federal Communications Commission last month granted Elon Musk’s SpaceX permission to launch 4,425 satellites that will provide affordable high speed broadband internet to consumers.

The Starlink network will be accessible in the US and around the world – including in areas where the internet is currently unavailable or unreliable.

SpaceX isn’t the only company investing in global internet infrastructure. Facebook, Google and Microsoft all have various projects underway to deliver high speed connectivity to remote and rural areas.

It’s all part of a trend of private companies attempting to breach the digital divide and wage a battle for the global internet.




Read more:
Connecting everyone to the internet won’t solve the world’s development problems


But entrusting market forces to build critical internet resources and infrastructure is problematic. These companies aren’t obligated to operate in the interest of consumers. In some cases their practices could serve to further entrench the existing digital divide.

Half the world’s population can’t access the internet

The internet is embedded in social, personal and economic life across the developed world.

But access varies significantly between industrialised nations that boast high per capita incomes, and developing nations with largely poor, rural populations.

For example, 94% of South Korean adults and 93% of Australian adults have access to the internet, compared with just 22% of Indians and 15% of Pakistanis.


As society becomes increasingly dependent on the internet, nations and communities need equal access. Otherwise legacy inequalities will become further entrenched and new divides will emerge, potentially creating a “permanent underclass”.

Tech giants battle it out

The tech giants have been investing heavily in critical infrastructure in recent years.

Google owns the FASTER trans-Pacific undersea cable link, which has carried data (at 60 terabits per second) between the US, Japan and Taiwan since 2016. Meanwhile, the Microsoft- and Facebook-funded MAREA trans-Atlantic cable has connected the US to southern Europe (at 160 terabits per second) since 2017.

New investments centre on atmospheric, stratospheric and satellite delivery strategies.

Along with SpaceX’s constellation of small satellites, Facebook’s internet.org uses atmospheric drones to deliver internet to rural and remote areas. Google’s Project Loon uses high altitude navigable balloons for the same purpose.


The privatisation of a public good is problematic

Private investors who build infrastructure are driven by commercial imperatives rather than a need to deliver social benefits. And that dynamic can entrench and exacerbate existing – and create new – digital, social and economic divides.

This can be innocuous enough, such as when the company that makes League of Legends built its own internet network to ensure its players weren’t upset by slow speeds.

But it’s more of a problem when faster connections can tilt investment and trading playing fields in favour of those with access, leaving ordinary investors out in the cold.




Read more:
How the internet is failing to drive economic development where promised


Facebook’s Free Basics is a program that aims to provide cheap internet services to consumers in developing countries. It currently operates in 63 developing nations.

Critics say the service is a blatant strategy to extend Facebook’s global dominance to the developing world. It’s also been accused of violating net neutrality by strictly controlling participating sites to eliminate Facebook’s competitors.

Technology is not neutral

Privately owned and operated internet infrastructure can also become a means of social control.

Termination of internet services is a notorious tactic used by authoritarian regimes to repress dissent by disrupting communication and censoring information. But private entities may also exercise control over infrastructure outside of government regulation.

For example, when WikiLeaks published government correspondence in 2010, Amazon and EveryDNS withdrew the services that maintained the WikiLeaks website. Mastercard, PayPal and Visa terminated services through which the organisation received funding for its activities.

These companies were not acting under government direction; they cited violations of their acceptable use policies to justify their decisions. Harvard professor Yochai Benkler said at the time:

Commercial owners of the critical infrastructures of the networked environment can deny service to controversial speakers, and some appear to be willing to do so at a mere whiff of public controversy.

SpaceX must meet a host of technical conditions before Starlink can be activated. But we shouldn’t assume that providing internet access to developing countries will lead to an ecosystem from which economic or social benefits will flow.

When the logic of corporate capitalism dominates the provision of internet services, there’s no guarantee that the internet’s founding principles – an egalitarian tool where users share information for the greater good – will be upheld.

Claudio Bozzi, Lecturer in Law, Deakin University

This article was originally published on The Conversation. Read the original article.

Australia’s digital divide is not going away



People in remote areas use the internet much less for entertainment and formal education compared to their urban counterparts.

Julian Thomas, RMIT University; Chris K Wilson, RMIT University, and Sora Park, University of Canberra

Despite large investments in the National Broadband Network, the “digital divide” in Australia remains largely unchanged, according to a new report from the Australian Bureau of Statistics.

The Australian Household Use of Information Technology report says we are doing more online, and we are using an increasing number of connected devices. Our homes are more connected.

However, the number of people using the internet is not growing, and the basic parameters of digital inequality in Australia – age, geography, education and income – continue to define access to and uses of online resources.


Almost 2.6 million Australians, according to these ABS figures, do not use the internet. Nearly 1.3 million households are not connected. So what is going on? The ABS data points to the complexity of the social and economic issues involved, but it also helps us identify the key areas of concern.

Who’s missing out

Age is a critical factor. While more than nine in ten people aged between 15 and 54 are internet users, the number drops to eight in ten of those aged 55-64 years, and to under six in ten of those over 65 years.

Most people with jobs (95.1%) are online, compared to just 72.5% of those not employed. Migrants from non-English speaking countries are less connected (81.6%) than those born in Australia (87.6%). Those already at a disadvantage – the very people who have the most to gain from all the extraordinary resources of the internet – are missing out.




Read more:
Three charts on: the NBN and Australia’s digital divide


This is not to say that it is only individuals that will benefit from greater digital inclusion. Raising the level of digital inclusion yields direct benefits for the community, government and business. There are, for instance, clear efficiency gains for government moving services online.

Raising the level of online health engagement for those over 65 years of age (the heaviest users of health care) would provide such a benefit. Currently, just over one in five people in this age category access online health services, substantially below the national average of two in five.


But nor should we focus only on the economic and efficiency gains of inclusion: the social benefits of connection and access to entertainment and information are considerable for most internet users, and especially so for those who are isolated and lonely, as older people may be.

Income and affordability matter

Australians with higher incomes are substantially more likely to have internet access at home than those with lower incomes – 96.9% of the highest quintile (bracket representing one fifth of the sample) income households have access, whereas only 67.4% of the lowest quintile have access.

And better-off Australians appear to be doing more online. Compared to the general population their uses of online banking and shopping, education and health services are higher. They are connected to the internet with multiple devices, with an average of 7.2 devices at home, compared to 4.4 in the lowest income quintile.

The gap between the major cities and the bush has not narrowed over time – 87.9% of those living in major cities have internet access at home, 82.7% in inner regional, 80.7% in outer regional and 77.1% in remote areas. It’s important to note that this survey did not include remote Indigenous communities, where the evidence suggests that internet access is usually very poor.

Among those who are connected, geographical differences in the means of access and modes of engagement with online services suggest a further gap among those who are already disadvantaged. People in remote areas use the internet much less for entertainment and formal education compared to their urban counterparts, which are services that require more bandwidth and better quality connections.


Unfortunately, the ABS did not ask why households do not have home internet access, as it did in 2014-15. That data revealed cost was a factor keeping 198,600 households offline. Unsurprisingly, 148,200 of these households were from the two lowest income quintiles. Cost was the major factor in keeping more than 30,000 of the 76,000 family households (with children under 15) offline.

Given the increasingly central role of the internet in educational activities, the fact that the number of family households without access has not fallen since 2014-15 is concerning.

Affordability will continue to be a problem as more data-intensive services are offered online and the demand for data increases, and as mobile services become increasingly important.




Read more:
Bridging the digital divide means accommodating diversity


However, cost was not the only reason people gave for non-use. Around 200,000 households in the two lowest income quintiles lacked the knowledge or confidence to use the internet. Digital ability, and our readiness to make use of the internet, are clearly areas for continuing attention. We know that interventions there can make a difference.

The final survey on household use of IT

This ABS survey is the last of its kind. We hope the Bureau will be able to undertake further surveys in this area. The end of this data series does not signal its lack of relevance, at a time when digital inclusion is more important than ever. On the contrary, it points to a pressing new challenge for governments, the community, and business.

As our service economy increasingly moves online — in education, health, work, and government services — we need to ensure that all Australians, particularly those already disadvantaged, have affordable access to the online world. A reliable evidence base to inform our work in this area is essential.

But the information we have should be enough to spark action in some critical areas. The affordability of broadband is clearly one of these. When we consider, for example, the situation of families with children — where cost is clearly an issue for a significant number of them — we need to recognise that existing policy settings and market mechanisms are not working.

The digital divide is likely to grow

The ABS findings correspond to other recent work in the area. Australian policy has long had the aim of making communications widely accessible across our huge country and dispersed, fairly small population.

But the Australian Digital Inclusion Index has highlighted the problem of affordability and unequal access across economic, social and spatial lines. Australia’s performance also compares poorly to other countries.

The Inclusive Internet Index, produced by The Economist’s Intelligence Unit, rates Australia at 25 out of 86 countries, behind Russia and Hungary.




Read more:
Inequality is not inevitable, it’s a policy choice, says Oxfam


So despite the egalitarian aspirations embodied in the policy language of the National Broadband Network, the evidence suggests that the Australian internet remains unusually unequal in terms of access and affordability.

Instead of a digital economy designed for everyone, we appear to have created a highly stratified internet, where the distribution of resources and opportunities online reflects Australia’s larger social and economic inequalities. The risk is that over time the digital divide will amplify these. Unfortunately there is little indication in the ABS data that any of the key indicators will change soon.

Julian Thomas, Director, Social Change Enabling Capability Platform, RMIT University; Chris K Wilson, Research Fellow, Technology, Communication and Policy Lab – Digital Ethnography Research Centre, RMIT University, and Sora Park, Director, News & Media Research Centre, University of Canberra

This article was originally published on The Conversation. Read the original article.

Tech diplomacy: cities drive a new era of digital policy and innovation



Cities will be driving globalisation and innovation in the emerging world order.

Hussein Dia, Swinburne University of Technology

France recently appointed a tech ambassador to Silicon Valley. French President Emmanuel Macron named David Martinon as “ambassador for digital affairs”, with jurisdiction over the digital issues that the foreign affairs ministry deals with. This includes digital governance, international negotiations and support for digital companies’ export operations.

The appointment is part of France’s international digital strategy, which is becoming a focus of its foreign policy. And France isn’t alone in doing this.

In early 2017, Denmark appointed a “TechPlomacy” ambassador to the tech industry. Casper Klynge is possibly the first-ever envoy to be dispatched to Silicon Valley with a clear mandate to build better relationships with major technology firms.




Read more:
Cities in the Future of Democracy


In an interview with Danish newspaper Politiken, Foreign Minister Anders Samuelsen said:

Big companies affect Denmark just as much as entire countries.

He isn’t wrong. According to geopolitical strategist Parag Khanna, the world’s top tech companies are achieving more international influence and economic power than dozens of nations put together. In 2016, the cash that Apple had on hand exceeded the gross domestic product (GDP) of two-thirds of the world’s countries.

Some of these global players are also influential policy actors in their own right. In 2016, Foreign Policy presented its Diplomat of the Year Award to Eric Schmidt, executive chairman of Google parent company Alphabet Inc. The award was in recognition of Google’s contributions to international relations through empowering citizens globally.

What’s different about TechPlomacy?

The recent ambassadorial appointments signify not only the important socio-economic and political roles of technology, but also how diplomacy is evolving and adapting to the disruptive changes in our societies.

These developments mark the prominence of tech-cities on the global scene. Nation states are no longer the only players in international affairs; cities are also taking centre stage.

As opposed to lobbying governments in the world’s capitals, the new breed of diplomats will target tech-cities with multi-trillion-dollar technology sectors. They will also rub shoulders and nurture a direct dialogue with organisations that have gigantic economic impacts. In 2016, for example, Google helped to inject US$222 billion in economic activity in the US alone.

The so-called “Google ambassadors” won’t be targeting Silicon Valley only. The Office of Denmark’s Tech Ambassador has a team with physical presence across three time zones in North America, Europe and Asia. It will also connect with tech hubs around the world.

As part of an interconnected planet, these tech hubs will increasingly play a more active role in the global economy. Decision-makers are starting to recognise the imperative to establish good relationships and understand the tech giants’ policies and agendas.




Read more:
Creative city, smart city … whose city is it?


Cities as autonomous diplomatic units

The rise of cities as “autonomous diplomatic units” may be a defining feature of the 21st century.

Already, just 100 cities account for 30% of the world’s economy and almost all its innovation. New York and London, together, represent 40% of global market capitalisation. According to the McKinsey Global Institute, the top 600 cities generate 60% of global GDP and are projected to be home to 25% of the world’s population by 2025.

McKinsey expects that 136 new cities will make it into the top 600 by 2025. All these new cities are from the developing world – 100 of them from China alone.

These global cities appear likely to dominate the 21st century. They will become magnets for economic activity and engines of globalisation. Khanna argues:

… [C]ities rather than states or nations are becoming the islands of governance on which the future world order will be built.

He also suggests that connectivity through an expanding matrix of infrastructure (64 million kilometres of roads, 4 million kilometres of railways and 1 million kilometres of internet cables) will far outweigh the importance of 500,000 kilometres of international borders.

Still more questions than answers

As more cities assert their leadership on the world stage, new mechanisms and networks (e.g. C40 Cities) could emerge. That could signal a new generation of diplomacy that relates and engages with cities rather than bilateral collaboration between nations.




Read more:
This is why we cannot rely on cities alone to tackle climate change


The C40 Cities Climate Leadership Group connects more than 90 of the world’s major cities.

Although these new diplomatic outposts have generated some profound interest, questions remain.

Will this era of tech diplomacy create collaborative ways to develop and achieve foreign policy priorities? Will it increasingly become a unifying global priority?

Do these appointments signify a transformation in international relationships? Will big tech companies also develop diplomatic capacities?

And will we witness the emergence of a post-national ideology of civic-ism, whereby people’s loyalty to the city surpasses that to a nation?

What comes next?

Not everyone will be excited by these appointments. Many would downplay their significance. Others would argue that tech companies have been engaged globally for years, and that they do this anyway as part of their “business as usual” activities.

Whether you embrace or object to it, a new world order is emerging around cities and their economies, rather than nations and their borders. These cities may ultimately chart pathways to their own sovereign diplomacy and formulate their own codes of conduct.

It is anyone’s guess whether the future will bear any resemblance to TechPlomacy or something else we haven’t yet imagined. The significance of these appointments will become clearer as the envoys go to work and we begin to understand the possibilities.

Hussein Dia, Associate Professor, Swinburne University of Technology

This article was originally published on The Conversation. Read the original article.

Why we are still convinced robots will take our jobs despite the evidence


Jeff Borland, University of Melbourne

The tale of new technologies causing the death of work is the prophecy that keeps on giving. Despite evidence to the contrary, we still view technological change today as being more rapid and dramatic in its consequences than ever before.

The mistaken view that robots will take our jobs may come from a human bias to believe that “we live in special times”. An absence of knowledge of history, the greater intensity of feeling about events which we experience first-hand, and perhaps a desire to attribute significance to the times in which we live, all contribute to this bias.

History repeating

In the 1930s, John Maynard Keynes envisaged that innovations such as electricity would produce a world where people spent most of their time on leisure activities. In the United States in the 1960s, Lyndon Johnson established a Presidential Commission to investigate fears that automation was permanently reducing the amount of work available.

Australia has not escaped the prophecy, with similar concerns about the future of work expressed in the 1970s.

In their history of Monash University, Graeme Davison and Kate Murphy report that:

In 1978, the historian Ian Turner organised a symposium on the implications of the new technologies. The world, he predicted, was about to enter a period as significant as the Neolithic or Industrial revolutions. By 1988, at least a quarter of the Australian workforce would be made redundant by technological change…

Some years later, Barry Jones continued the gloomy forecasts in his best-seller Sleepers, Wake!:

In the 1980s, new technologies can decimate the labour force in the goods producing sectors of the economy…

Of course, none of this came to pass in Australia; just as work did not disappear in the 1930s in the United Kingdom, or the 1960s in the United States.

Yet today, we are seeing the resurrection of the prophecy. Commentary on the Australian labour market abounds with claims that the world of work is undergoing radical and unprecedented change.

The increased application of computer-based technologies in the workplace is suggested to be causing a reduction in the total amount of work available; or to be bringing a more rapid pace of substitution of machines for humans than has been seen previously.

No evidence for the death of work

In recent research with Michael Coelli, we argue that the prophecy is no more likely to be realised in the 2010s in Australia than in the 1970s.

Certainly, there is no evidence that the death of work is at present underway. Since the mid-1960s, the aggregate hours worked by the Australian population (on a per capita basis) have remained stable.

In particular, there has been no long-run decline in the aggregate amount of work that matches the timing of the progressive introduction of computers to the workplace since the early 1980s.


Source: Borland, J. and M. Coelli (2017), ‘Are robots taking our jobs?’, Australian Economic Review, forthcoming, Figure 3


Moreover, the pace at which workers are churning between jobs in the Australian labour market is not getting quicker. Not only is there no evidence that more workers are being forced to work in short duration jobs, but what is apparent is that the opposite has happened. The proportion of workers in very long duration jobs has increased over the past three decades.


Source: Borland, J. and M. Coelli (2017), ‘Are robots taking our jobs?’, Australian Economic Review, forthcoming, Figure 9


Why work is not disappearing

There are good reasons why we should not expect new technologies to cause the death of work. New technologies always cause job losses, but that is only part of the story. What also needs to be understood is how they increase the amount of work available.

One way this happens is through the increases in incomes that accompany the application of new technologies. With the introduction of these technologies, it may take less labour time to produce what used to be consumed, but higher real incomes, together with an apparently unlimited human desire to spend, bring extra demand for existing products as well as for new types of goods and services – and hence extra demand for workers to provide them.

As well, new technologies are likely to substitute for some types of workers, but to be complementary to, and hence increase demand for, other types of workers. Computer-based technologies appear to be complementary to workers who perform non-routine cognitive jobs.

In a report on the digitally enabled workforce, Stefan Hajkowicz and co-authors suggest a range of examples for Australia – such as an increase in demand for photographers at the same time as demand for photographic developers and printers has decreased; an increase in demand for graphic designers versus a decrease in demand for printers and graphic press workers; and a decrease in demand for bank tellers simultaneously with an increase in demand for finance professionals.

The end of work is no closer in Australia today than at any time in the past. So, perhaps there is a need to keep disproving the prophecy, to change our mindset.

Jeff Borland, Professor of Economics, University of Melbourne

This article was originally published on The Conversation. Read the original article.

5G will be a convenient but expensive alternative to the NBN


Rod Tucker, University of Melbourne

Will Australia’s National Broadband Network (NBN) face damaging competition from the upcoming 5G network? NBN Co CEO Bill Morrow thinks so.

This week, he even floated the idea of a levy on mobile broadband services, although Prime Minister Malcolm Turnbull quickly rejected the idea.

NBN Co is clearly going to have to compete with mobile broadband on an equal footing.


Read More: Like it or not, you’re getting the NBN, so what are your rights when buying internet services?


This latest episode in the NBN saga raises the question of exactly what 5G will offer broadband customers, and how it will sit alongside the fixed NBN network.

To understand how 5G could compare with the NBN, let’s examine the key differences and similarities between mobile networks and fixed-line broadband.

What is 5G?

5G stands for “5th generation mobile”. It builds upon today’s 4G mobile network technology, but promises to offer higher peak connection speeds and lower latency, or time delays.

5G’s higher connection speeds will be possible thanks to improved radio technologies, increased allocations of radio spectrum, and the use of many more antenna sites or base stations than today’s networks. Each antenna will serve a smaller area, or cell.

The technical details of 5G are currently under negotiation in international standards bodies. 5G networks should be available in Australia by 2020, although regulatory changes are still needed.

Connections on 5G

In a mobile network, the user’s device (typically a smart phone) communicates with a nearby wireless base station via a radio link. All users connected to that base station share its available data capacity.

Australia’s mobile network typically provides download speeds of around 20 Mb/s. But the actual speed of connection for an individual decreases as the number of users increases. This effect is known as contention.

Anyone who has tried to upload a photo to Facebook from the Melbourne Cricket Ground will have experienced this.
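A rough sketch of how contention works: every active user in a cell shares the same radio capacity, so per-user speed falls as the crowd grows. The capacity figure below is a hypothetical round number, not a measurement of any real 4G or 5G network.

```python
# Simple illustration of contention: active users in a cell share its capacity.

def per_user_speed(cell_capacity_mbps, active_users):
    """Approximate best-case download speed per user, ignoring radio overheads."""
    return cell_capacity_mbps / max(active_users, 1)

cell_capacity_mbps = 1000  # hypothetical shared capacity of one base station

for users in (10, 100, 1000, 10000):
    speed = per_user_speed(cell_capacity_mbps, users)
    print(f"{users:>6} active users -> ~{speed:.1f} Mb/s each")
```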


The maximum download speed of 5G networks could be more than 1 Gb/s. But in practice, it will likely provide download speeds around 100 Mb/s or higher.

Because of contention and the high cost of the infrastructure, mobile network operators also impose significant data download limits for 4G. It is not yet clear what level of data caps will apply in 5G networks.

Connections on the NBN

In a fixed-line network like the NBN, the user typically connects to the local telephone exchange via optical fibre: directly, in the case of fibre-to-the-premises (FTTP), or via copper wiring and then fibre, in the case of fibre-to-the-node (FTTN).

An important difference between the NBN and a mobile network is that on the NBN, there is virtually no contention on the data path between the user and the telephone exchange. In other words, the user’s experience is almost independent of how many other users are online.

But, as highlighted in the recent public debate around the NBN, some users have complained that NBN speeds decrease at peak usage times.

Importantly, this is not a fundamental issue with the NBN technology. Rather, it is caused by artificial throttling that results from NBN Co’s Connectivity Virtual Circuit (CVC) charges, and/or by contention in the retail service provider’s network.

Retail service providers like TPG pay CVC charges to NBN Co to gain bandwidth into the NBN. These charges are currently quite high, and this has allegedly encouraged some service providers to skimp on bandwidth, leading to contention.

A restructuring of the wholesale model as well as providing adequate bandwidth in NBN Co’s transit network could easily eliminate artificial throttling.

The amount of data allowed by retailers per month is also generally much higher on the NBN than in mobile networks. It is often unlimited.

This will always be a key difference between the NBN and 5G.

Don’t forget, 5G needs backhaul

In wireless networks, the connection between the base stations and internet is known as backhaul.

Today’s 4G networks often use microwave links for backhaul, but in 5G networks where the quantity of data to be transferred will be higher, the backhaul will necessarily be optical fibre.

In the US and elsewhere, a number of broadband service providers are planning to build 5G backhaul networks using passive optical network (PON) technology. This is the type used in the NBN’s FTTP sections.

In fact, this could be a new revenue opportunity for NBN Co. It could encourage the company to move back to FTTP in certain high-population density areas where large numbers of small-cell 5G base stations are required.

So, will 5G compete with the NBN?

There is a great deal of excitement about the opportunities 5G will provide. But its full capacity will only be achieved through very large investments in infrastructure.

As with today’s 4G network, large data downloads for video streaming and other bandwidth-hungry applications will likely be more expensive on 5G than on the NBN.


Read More: The NBN needs subsidies if we all want to benefit from it


In addition, future upgrades to the FTTP sections of the NBN will accommodate download speeds as high as 10 Gb/s, which will not be achievable with 5G.

Unfortunately, those customers served by FTTN will not enjoy these higher speeds because of the limitations of the copper connections between the node and the premises.

5G will provide convenient broadband access for some internet users. But as the demand for ultra-high-definition video streaming and new applications such as virtual reality grows, the NBN will remain the network of choice for most customers, especially those with FTTP services.

Rod Tucker, Laureate Emeritus Professor, University of Melbourne

This article was originally published on The Conversation. Read the original article.

Explainer: how to extend your phone’s battery life



Without proper care, mobile phone batteries can degrade and hold less charge.

Jacek Jasieniak, Monash University

As mobile phone users, all we want is enough battery life to last the day. Frustratingly, the older the device, the less power it seems to have.

In fact, the amount of battery life our mobiles have on any given day depends on two key factors: how we use them on that particular day, and how we used them in the past.

Mobile phones use lithium-ion batteries for energy storage. In this type of battery, lithium metal and lithium ions move in and out of individual electrodes, causing them to physically expand and contract.


Read more: Do you know where your batteries come from?


Unfortunately, these processes are not completely reversible and the batteries lose their charge capacity and voltage as the number of charge and discharge cycles grows.

To make matters worse, the electrolyte (electrically conductive liquid) that connects the electrodes also degrades throughout these cycles.

The ability of lithium-ion batteries to store charge depends on the extent of their degradation. This means there is a link between how we handle our devices today and the charge capacity available in the future.

Through a few simple steps, users can minimise this degradation and extend their device’s life.

Lithium-ion batteries are the main battery type in mobile phones.

Strategies for extending battery capacity

Control battery discharge

Typical lithium-ion batteries for mobile phones are supposed to retain 80% of their charge capacity after 300-500 charge/discharge cycles. However, batteries rarely produce this level of performance, with charge storage capacity sometimes reduced to 80% levels within only 100 cycles.

Fortunately, we can extend our future battery capacity by limiting how much we discharge our mobile phone batteries. With most battery degradation occurring during deep discharge/charge cycles, it is actually better to limit the battery discharge during any one cycle before charging it again.

As it happens, our devices do have battery-management systems, which reduce damage from overcharging and shut down automatically if the battery gets too low.

Nonetheless, to maximise the battery capacity in the future we should avoid that 0% battery mark altogether, while also keeping those batteries at least partially charged if storing them for a prolonged period of time to avoid deep discharge.
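As a rough illustration of why shallow discharges help, the sketch below counts partial discharges as fractions of a full cycle and spreads the article’s own figure – roughly 80% capacity remaining after about 500 full cycles – linearly across those cycles. Real lithium-ion degradation is far more complex (deep cycles cause disproportionately more wear than a linear model shows), so treat this purely as a teaching aid.

```python
# Illustrative only: estimate remaining capacity by counting partial
# discharges as fractional cycles, using the ~80% after ~500 full cycles
# figure quoted above and assuming the loss accrues linearly.
# In reality, deep cycles do disproportionately more damage than this shows.

FADE_PER_FULL_CYCLE = 0.20 / 500   # 20% capacity lost over ~500 full cycles

def remaining_capacity(discharge_depths):
    """discharge_depths: per-day discharge fractions (0.0 to 1.0)."""
    equivalent_cycles = sum(discharge_depths)
    return max(1.0 - equivalent_cycles * FADE_PER_FULL_CYCLE, 0.0)

deep = [1.0] * 365      # a year of full 100%-to-0% discharges
shallow = [0.4] * 365   # a year of shallow 40% discharges

print(f"Deep daily discharges:    ~{remaining_capacity(deep):.0%} capacity left")
print(f"Shallow daily discharges: ~{remaining_capacity(shallow):.0%} capacity left")
```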

Extend charging times

Many of today’s mobile devices have a fast charge option that enables users to supercharge them in minutes rather than hours. This is convenient when we’re in a rush, but should be avoided otherwise.

Why? Because charging a battery too quickly reduces its storage capacity.

Physically, the shuttling of lithium metal and lithium ions between the electrodes in lithium-ion batteries is a slow process. Therefore, charging at lower rates allows more complete shuttling to occur, which enhances the battery’s charge capacity.

For example, charging a phone in five minutes compared with the standard two hours can reduce the battery capacity for that charge cycle by more than 20%.


Read more: How to make batteries that last (almost) forever


Keep the temperature just right

Fortunately, for most parts of the country, temperatures in Australia sit between 0℃ and 45℃ throughout the year. This is the exact range in which lithium-ion batteries can be stored to maintain optimal long-term charge capacity.

Below 0℃, the amount of power available within the battery system is reduced because of a restriction in the movement of lithium metal and lithium ions within the electrodes and through the electrolyte.

Above 45℃, the amount of power available is actually enhanced compared with lower temperatures, so you can get a little more “juice” from your battery under hotter conditions. However, at these temperatures the degradation of the battery is also greatly accelerated, so over an extended period of time its ability to store charge will be reduced.

As a result, phones should be kept out of direct sunlight for prolonged periods, especially in summer when surface temperatures can increase to above 70℃.

Mobile phones only have a limited number of charge cycles before the battery loses its capacity to recharge entirely.

Use battery-saving modes

Aaron Carroll and Gernot Heiser from Data61 analysed the power consumption of different smartphone components under a range of typical scenarios.

They concluded there are a handful of simple software and hardware strategies that can be used to preserve battery life.

  • Reduce screen brightness. The easiest way to conserve battery life while maintaining full function is to reduce the brightness of the screen. For devices such as mobile phones that have an organic light emitting diode (OLED) display, you can also use the “light on dark” option for viewing.

  • Turn off the cellular network or limit talk time. The connection to the cellular network uses the global system for mobile communication (GSM) module. The GSM module is the single most energy-consuming component in a mobile phone, so it is beneficial to turn it off altogether or at least limit call time.

  • Use Wi-Fi, not 4G. With Wi-Fi being up to 40% less power-hungry than 4G for internet browsing, turning off cellular data and using Wi-Fi instead will help your battery life.

  • Limit video content. Video processing is one of the most power-consuming operations on a mobile device.

  • Turn on smart battery modes. All modern mobile devices have a smart battery saving mode (for instance, Android has Power Saving Mode and iOS has Low Power Mode). These software features modify central processing unit (CPU) usage for different apps, screen brightness, notifications and various hardware options to reduce energy consumption.

  • Use Airplane mode. This mode typically disables GSM, Wi-Fi, bluetooth and GPS functions on your devices. When turning off all such auxiliary functions, the device will use only up to 5% of its usual energy consumption with the screen off. For comparison, simply having your device in idle can still use more than 15%.

Enhancing your phone’s battery usability requires a combination of limiting the use of power-hungry hardware and software, as well as handling mobile devices so as to maximise the charge capacity and minimise battery degradation.

By adopting these simple strategies, users can extend their battery life by more than 40% in any given day while maintaining a more consistent battery capacity throughout the lifetime of the device.

Jacek Jasieniak, Associate Professor of Materials Science & Engineering and Director of the Monash Energy Materials & Systems Institute (MEMSI), Monash University

This article was originally published on The Conversation. Read the original article.

Cloud, backup and storage devices: how best to protect your data


How much data do you still store only on your mobile, tablet or laptop?

Adnene Guabtni, Data61

We are producing more data than ever before, with more than 2.5 quintillion bytes produced every day, according to computer giant IBM. That’s a staggering 2,500,000,000 gigabytes of data, and it’s growing fast.

We have never been so connected through smart phones, smart watches, laptops and all sorts of wearable technologies inundating today’s marketplace. There were an estimated 6.4 billion connected “things” in 2016, up 30% from the previous year.

We are also continuously sending and receiving data over our networks. This unstoppable growth is unsustainable without some kind of smartness in the way we all produce, store, share and backup data now and in the future.

In the cloud

Cloud services play an essential role in achieving sustainable data management by easing the strain on bandwidth, storage and backup solutions.

But is the cloud paving the way to better backup services or is it rendering backup itself obsolete? And what’s the trade-off in terms of data safety, and how can it be mitigated so you can safely store your data in the cloud?

The cloud is often thought of as an online backup solution that works in the background on your devices to keep your photos and documents, whether personal or work related, backed up on remote servers.

In reality, the cloud has a lot more to offer. It connects people together, helping them store and share data online and even work together online to create data collaboratively.

It also makes your data ubiquitous, so that if you lose your phone or your device fails you simply buy a new one, sign in to your cloud account and voila! – all your data are on your new device in a matter of minutes.

Do you really back up your data?

Another important advantage of cloud-based backup services is automation and ease of use. With traditional backup solutions, such as using a separate drive, people often discover, a little too late, that they did not back up certain files.

Relying on the user to do backups is risky, so automating it is exactly where cloud backup is making a difference.

Cloud solutions have begun to evolve from online backup services to primary storage services. People are increasingly moving from storing their data on their device’s internal storage (hard drives) to storing them directly in cloud-based repositories such as DropBox, Google Drive and Microsoft’s OneDrive.

Devices such as Google’s Chromebook do not use much local storage to store your data. Instead, they are part of a new trend in which everything you produce or consume on the internet, at work or at home, would come from the cloud and be stored there too.

Recently announced cloud technologies such as Google’s Drive File Stream or Dropbox’s Smart Sync are excellent examples of how cloud storage services are heading in a new direction with less data on the device and a bigger primary storage role for the cloud.

Here is how it works. Instead of keeping local files on your device, placeholder files (sort of empty files) are used, and the actual data are kept in the cloud and downloaded back onto the device only when needed.

Edits to the files are pushed to the cloud so that no local copy is kept on your device. This drastically reduces the risk of data leaks when a device is lost or stolen.
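A minimal sketch of the placeholder idea, assuming a made-up fetch_from_cloud function in place of whatever download protocol a real provider uses: the local entry stores only metadata, and the content is downloaded the first time it is actually read.

```python
# Sketch of a placeholder ("online-only") file: metadata lives locally,
# content is fetched from the cloud only when first accessed.
# fetch_from_cloud is a stand-in for a real provider's download API.

def fetch_from_cloud(remote_id):
    # Placeholder for a network call to the cloud provider.
    return b"...file contents downloaded on demand..."

class PlaceholderFile:
    def __init__(self, name, remote_id, size):
        self.name = name
        self.remote_id = remote_id
        self.size = size            # metadata is known without downloading
        self._content = None        # nothing stored locally yet

    def read(self):
        if self._content is None:   # first access triggers the download
            self._content = fetch_from_cloud(self.remote_id)
        return self._content

doc = PlaceholderFile("report.docx", remote_id="abc123", size=48_213)
print(doc.size)         # available immediately, no download needed
print(doc.read()[:10])  # content fetched on demand
```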

So if your entire workspace is in the cloud, is backup no longer needed?

No. In fact, backup is more relevant than ever, as disasters can strike cloud providers themselves, with hacking and ransomware affecting cloud storage too.

Backup has always had the purpose of reducing risks using redundancy, by duplicating data across multiple locations. The same can apply to cloud storage which can be duplicated across multiple cloud locations or multiple cloud service providers.

Privacy matters

Yet beyond the disruption of the backup market, the number-one concern about the use of cloud services for storing user data is privacy.

Data privacy is strategically important, particularly when customer data are involved. Many privacy-related problems can happen when using the cloud.

There are concerns about the processes used by cloud providers for privacy management, which often trade privacy for convenience. There are also concerns about the technologies cloud providers put in place to overcome privacy-related issues, which are often not effective.

When it comes to technology, encryption tools protecting your sensitive data have actually been around for a long time.

Encryption works by scrambling your data with a very large digital number (called a key) that you keep secret so that only you can decrypt the data. Nobody else can decode your data without that key.

Using encryption tools to encrypt your data with your own key before transferring it into the cloud is a sensible thing to do. Some cloud service providers are now offering this option and letting you choose your own key.
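As a concrete illustration, here is a minimal sketch using the Fernet recipe from the widely used Python cryptography package to encrypt a file with your own key before it ever leaves your device. The file names are illustrative, and key management – backing the key up safely and never uploading it alongside the data – is the part that actually matters and is only hinted at here.

```python
# Encrypt a file locally before uploading it to cloud storage.
# Requires the third-party "cryptography" package: pip install cryptography

from cryptography.fernet import Fernet

# Generate a key once and keep it safe (and OFF the cloud account
# that stores the encrypted data).
key = Fernet.generate_key()
fernet = Fernet(key)

with open("holiday_photos.zip", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("holiday_photos.zip.enc", "wb") as f:
    f.write(ciphertext)   # this is the file you upload to the cloud

# Later, after downloading the encrypted copy back from the cloud:
with open("holiday_photos.zip.enc", "rb") as f:
    original = fernet.decrypt(f.read())
```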

Share vs encryption

But if you store data in the cloud for the purpose of sharing it with others – and that’s often the precise reason that users choose to use cloud storage – then you might require a process to distribute encryption keys to multiple participants.

This is where the hassle can start. People you share data with would need to get the key too, in some way or another. Once you share that key, how would you revoke it later on? How would you prevent it from being re-shared without your consent?

More importantly, how would you keep using the collaboration features offered by cloud providers, such as Google Docs, while working on encrypted files?

These are the key challenges ahead for cloud users and providers. Solutions to those challenges would truly be game-changing.

Adnene Guabtni, Senior Research Scientist/Engineer, Data61

This article was originally published on The Conversation. Read the original article.