China: Boot Camp for Internet Addicts


Vital Signs: NBN’s new price plans are too little, too late


Lack of speed kills: finally NBN Co is thinking about a genuinely 21st century offering for customers.
http://www.shutterstock.com

Richard Holden, UNSW

This week NBN Co announced pricing changes for the National Broadband Network.

It includes a new plan boasting a download speed of 1 gigabit per second and an upload speed of 50 megabits per second for $80 a month.

These are 20-fold improvements on the maximum NBN speeds now. Almost a decade since the first customers were connected, NBN Co is thinking about a genuinely 21st century offering in terms of speed and price.
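To get a feel for what those numbers mean in practice, here is a back-of-the-envelope sketch in Python. The 5 GB file size is an assumption for illustration, not an NBN figure; the speeds are the plan tiers discussed above and the 12 Mbps baseline quoted later in the piece.

```python
# Rough download-time comparison for a hypothetical 5 GB file.
# File size is an illustrative assumption; speeds are plan tiers.

def download_seconds(size_gb: float, speed_mbps: float) -> float:
    """Time in seconds to move size_gb gigabytes at speed_mbps megabits/s."""
    bits = size_gb * 8e9        # 1 GB = 8e9 bits (decimal units)
    return bits / (speed_mbps * 1e6)

for label, mbps in [("12 Mbps", 12), ("100 Mbps", 100), ("1 Gbps", 1000)]:
    print(f"{label}: {download_seconds(5, mbps) / 60:.1f} minutes")
# 12 Mbps: ~56 minutes; 100 Mbps: ~7 minutes; 1 Gbps: under a minute
```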

The NBN is late, over budget and slow. Australia places 58th globally for fixed-line broadband speed. Not only do the NBN’s advertised speeds lag international standards but the actual speeds often don’t come close to what is promised.




Read more:
Logged out: farmers in Far North Queensland are being left behind by the digital economy


As a result, customer take-up has been unenthusiastic. NBN Co may well need to take a massive write-down on its assets, because they don't look like they're worth A$50 billion.

All of this was entirely predictable, based on politicians failing to remember three basic lessons from Economics 101.

1: Technology often outstrips imagination

The history of innovation is littered with examples of remarkably important things being invented with no clear purpose in mind, or by accident, and then exceeding our wildest expectations.

Penicillin and vulcanised rubber (which led to the automobile tyre) were both invented by accident. The world wide web was developed as a means of communication among particle physicists. Most of us carry around in our pocket a computer (the mobile phone) roughly as powerful as the world's fastest supercomputer circa 1985. These have all turned out to be pretty useful.

When the Coalition decided to scuttle Labor's plan for fibre-optic cable to every premises – on the basis that "fibre-to-the-node", using existing copper telephone wires for the final stretch, would be much cheaper – this is what the chief spruiker of the Coalition's NBN plan, Malcolm Turnbull, said about broadband needs in 2010:

There isn’t much or anything you can do with 100 Mbps that you can’t do with 12 Mbps for residential customers.

The breathtaking lack of insight and imagination in this comment is responsible in no small part for the Flintstonian broadband infrastructure Australia now has.

Prioritising speed of roll-out (which hasn’t even happened) over speed of internet (which sure has happened) was a massive mistake.

2: Positives justify subsidies

You having fast internet is good for me when we connect. When consumers can connect quickly to a business’s website that’s good for the business. It makes it more profitable for businesses to invest in their internet operations. This has benefits for other consumers and even other businesses.

A great illustration of this is in Dunedin, New Zealand, where there have been all sorts of business-to-business spillovers from the city having the fastest internet speeds in Australasia. The ABC’s Four Corners program has highlighted how this has revolutionised New Zealand’s video-game development industry, among other things.

Economists call spillover effects to third parties externalities. Pollution is a negative externality, while the benefit of fast internet is a positive externality.




Read more:
Digital inclusion in Tasmania has improved in line with NBN rollout – will the other states follow?


A sound business model for the NBN ought to recognise the positive externalities and ensure they are incorporated into the price mechanism, by offering a partial subsidy to encourage people to sign up. Like the reverse of a carbon price.
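The subsidy logic can be made concrete with a stylised example. All the dollar figures below are hypothetical, chosen only to show how a subsidy equal to the external benefit flips the sign-up decision.

```python
# Stylised Pigouvian-subsidy example (all dollar figures are hypothetical).
# A household values a fast connection at $70/month; connecting it also
# creates $20/month of spillover benefit for businesses and other users.
cost = 80           # monthly cost of providing the connection
private_value = 70  # what the household itself would pay
externality = 20    # benefit to everyone else

# Without a subsidy the household doesn't sign up, even though
# society as a whole would gain from the connection.
signs_up_unsubsidised = private_value >= cost
social_surplus = private_value + externality - cost

# A subsidy equal to the external benefit aligns the two decisions.
subsidy = externality
signs_up_subsidised = private_value >= cost - subsidy

print(signs_up_unsubsidised, signs_up_subsidised, social_surplus)
# -> False True 10
```

The point is exactly the reverse-carbon-price analogy: a carbon tax makes polluters face the external cost they impose; a connection subsidy makes households face the external benefit they create.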

One of the NBN’s key problems is the way successive governments structured national investment in it. Setting up NBN Co as a quasi-corporate entity needing to make a commercial rate of return on the roughly A$50 billion investment in the network was a huge mistake. It was the opposite of providing a subsidy.

The telecommunications companies who retail the NBN have complained that NBN Co’s wholesale price points mean it is hard for resellers to make a profit. It’s a kind of quality death spiral: an unattractive product means fewer people buy it, leading to the product getting worse, leading to even fewer people buying it.

3: Uniform pricing doesn’t work

Finally, it’s never a good idea to charge everyone the same price when there are different costs to serve different people.

The idea was that higher returns from easy-to-service city homes would subsidise the higher costs of servicing homes in regional and remote areas. But city homes, precisely because they are cheaper to service, have other options. If not enough city customers signed up to the NBN, prices would be driven up, making the network even less attractive to city customers. It's textbook adverse selection, just like in health-insurance markets.
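The unravelling dynamic can be sketched as a toy simulation. The numbers are invented: 100 households willing to pay between $1 and $100 a month, and a fixed network cost that uniform pricing must recover from whoever stays.

```python
# Toy adverse-selection spiral under uniform (average-cost) pricing.
# All numbers are invented for illustration.

def death_spiral(fixed_cost: float, willingness_to_pay: list[float]) -> list[int]:
    """Return subscriber counts per round until the market stabilises."""
    subscribers = list(willingness_to_pay)
    history = []
    while subscribers:
        price = fixed_cost / len(subscribers)  # uniform price covers the cost
        stayers = [w for w in subscribers if w >= price]
        history.append(len(stayers))
        if len(stayers) == len(subscribers):   # stable: nobody else exits
            break
        subscribers = stayers
    return history

# 100 households, $8,000 fixed cost: the first price of $80 drives out
# everyone valuing the service below $80, the next price drives out the rest.
print(death_spiral(8000, list(range(1, 101))))
# -> [21, 0]
```

Each round of exits raises the break-even price for those who remain, which is the "quality death spiral" described above in price form.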

The government tried to get around this by banning competition. But that’s never really possible, especially from technologies not yet invented. Like 5G. The 2010 business case assumed no more than 16% of households would go wireless. Oops.

As economic journalist Peter Martin wrote in 2011:

NBN will never make a return on the cost of its capital or meet its customer targets if it faces competition. Its corporate plan says so, at point 1: “The plan assumes effective regulatory protection to prevent opportunistic cherry picking […] the viability of the project is dependent upon this protection.”

What to do from here

Multiple governments have bungled the NBN. But there is a way to salvage things – a bit.

Holding constant the technology (fibre-to-the-node), the best thing the government could do is write down its investment massively – ideally so low that it can flog NBN Co off to someone who can be subject to access regulation – ensuring, like other utilities, ownership of infrastructure doesn’t stymie competition – and make a modest rate of return.




Read more:
What should be done with the NBN in the long run?


Our super funds are always sticking up their hands for infrastructure investment. This would be a good one.

Ideally, though, the technology should be fixed. Fibre-to-the-premises was always going to be expensive, but it was also going to be fast, and as future-proof as we could get.

Lack of imagination and an inability to think past 12 Mbps less than ten years ago should not hold the nation back now.

Richard Holden, Professor of Economics, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

30 years since Australia first connected to the internet, we’ve come a long way



Out of the science labs, our internet connectivity is now part of our everyday lives.
Shutterstock/AngieYeoh

Justin Zobel, University of Melbourne

This article is part of our occasional long read series Zoom Out, where authors explore key ideas in science and technology in the broader context of society and humanity.


When Australia joined the global internet on June 23, 1989 – via a connection made by the University of Melbourne – it was mostly used by computer scientists.

Three decades later, more than 86% of Australian households are connected to the internet.

But it was a slow start. At first, network capacity was limited to very small volumes of information.

This all changed thanks to the development of vastly more powerful computers, and other technologies that have transformed our online experience.

One of those technologies is probably in front of you now: the screen.

Look at how you view the web, email and apps today: not just on large desktop screens but also handheld devices, and perhaps even an internet-connected wristwatch.

This was barely imaginable 30 years ago.

Today you can get share price updates on your internet-connected Apple Watch.
Flickr/Shinya Suzuki, CC BY-ND

Connected to the world

By the time Australia first connected, the internet had been developing for 20 years. The very first network had been turned on in the United States in 1969.

Australia too had networks during the 1980s, but distance and a lack of interest from commercial providers meant these were isolated from the rest of the world.

This first international link provided just 56 kilobits per second of national connectivity – about a 20th of a megabit for the whole country! That is not even enough to stream a single piece of music (encoded at 128 kbps), and it would have taken about a week to transfer a movie to Australia.
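Those claims are easy to check with back-of-the-envelope arithmetic. The file sizes below are assumptions (a three-minute song at 128 kbps, a DVD-quality movie of roughly 4.7 GB), but the link speed is the 56 kbps figure from the text.

```python
# Back-of-envelope check of the 56 kbps figure (file sizes are assumptions).
LINK_BPS = 56_000  # the whole country's international link in 1989

def transfer_days(size_gb: float) -> float:
    """Days to move size_gb gigabytes over the 56 kbps link."""
    return size_gb * 8e9 / LINK_BPS / 86_400

# A 3-minute song at 128 kbps; a DVD-quality movie of ~4.7 GB.
song_seconds = (128_000 * 180) / LINK_BPS
print(f"song: {song_seconds / 60:.0f} min, movie: {transfer_days(4.7):.1f} days")
# -> song: 7 min, movie: 7.8 days
```

A seven-minute transfer for a three-minute song means the link could not even keep up with real-time playback, and the movie does indeed take about a week.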

But at that time digital music, video and images were not distributed online. Nor was the internet servicing a large community. Most of the users were academics or researchers in computer science or physics.

With a continuous connection came live access. The most immediate impact was that email could now be delivered instantly.

At first, email and internet news groups (discussion forums) were the main traffic, but the connection also gave access to information sharing services such as Archie and WAIS, which were mostly used to share software.

There was connection too, in principle at least, to the newly created world wide web, which in June 1989 was just three months old and largely unknown. It wouldn’t become significant for another four years or so.

An early version of the first web page.
CERN/Screengrab

This turning-on of a connection was not a “light in a darkened room” moment, in which we suddenly had access to the resources that are now so familiar to us.

But it was a crucial step, one of several developments maturing in parallel that created the technology that has so drastically transformed our society, commerce and daily lives. Within just a few years we were surfing the web and sending email from home.

The technology develops

The first of these developments was the internet itself, which was and is a cobbling-together of disparate networks around the globe.

Australia had several networks, ranging from the relatively open ACSNET (now called AARNET) created by computer science departments to connect universities to, at the other extreme, proprietary, secure networks operated by defence and industry.

When Melbourne opened that first link, it provided a bridge from ACSNET to the networks in the United States and from there to the rest of the world.

Just as important were developments in the underlying technology. At the time, the capacity of the networks was adequate – just. As the community of users rapidly grew, it sometimes seemed as though the internet might utterly break down.

By the mid-1990s bandwidth (the volume of digital traffic that a network can carry) increased to an extent that earlier had seemed unimaginable. This provided the data transmission infrastructure the web would come to demand.

Another development was computing hardware. Computers were doubling in speed every 18 months, as had been predicted. They also became much cheaper.
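The compound effect of that doubling is easy to underestimate, so here is the arithmetic spelled out for the three decades since the 1989 connection (the 18-month doubling period is the prediction referred to above; treating it as exact is a simplification).

```python
# Compound effect of doubling every 18 months over 30 years
# (assumes the 18-month doubling holds exactly throughout).
years = 30
doublings = years / 1.5          # one doubling per 18 months
speedup = 2 ** doublings
print(f"{doublings:.0f} doublings -> ~{speedup:,.0f}x faster")
# -> 20 doublings -> ~1,048,576x faster
```

Twenty doublings is a factor of about a million, which is why hardware that was hopelessly inadequate for the web in the 1980s could comfortably serve it a decade later.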

A Macintosh desktop computer from 1985.
Flickr/Luke Jones, CC BY

Computer disks were also growing in capacity, doubling in size every year or so. The yet-to-appear web would require disk space for storage of web pages, and compute capacity for running servers, which are applications that provide a door into a computer, giving users remote access to data and software.

In the 1980s these had been scarce, expensive resources that would have been overwhelmed by even small volumes of web traffic. By the early 1990s growth in capacity could – just – accommodate the demand that suddenly appeared and homes were being connected, via dial-up at first.

Dial-up internet connection.
SoundBible/ezwa

A new operating system

But it is a third concurrent development that is, to me, the most remarkable.

This is the emergence of the UNIX operating system and of a community of people who collaboratively wrote UNIX-based code for free (yes, for no charge). Their work provided what is arguably the core of the systems that underpin the modern world.

UNIX was created by Dennis Ritchie, Ken Thompson and a small number of colleagues at AT&T Bell Labs, in the US, from 1970.

Ken Thompson and Dennis Ritchie with a DEC PDP-11 system running UNIX.
Wikimedia/Peter Hamer, CC BY-SA

At that time, operating systems (like iOS on today’s Apple phones) were limited to a single type of computer. Code and programs could not be used across machines from different manufacturers.

UNIX, in contrast, could be used on any suitable machine. This is the reason UNIX variants continue to provide the core of Apple Mac computers, Android phones, systems such as inflight entertainment and smart TVs, and many billions of other devices.

The open source movement

Along with UNIX came a culture of collaborative code development by programmers. This was initially via sharing of programs sent on tape between institutions as parcels in the mail. Anyone with time to spare could create programs and share them with a community of like-minded users.

This became known as the open source movement. Many thousands of people helped develop software of a diversity and richness that was beyond the resources of any single organisation. And it was not driven by commercial or corporate needs.

Programs could embody speculative innovations, and any developer who was frustrated by errors or shortcomings in the tools they used could update or correct them.

A key piece of open source software was the server, a computer system in a network shared by multiple users. Providing anonymous users with remote access was far from desirable for commercial computers of the era, on which use of costly computing time was tightly controlled.

But in an academic, sharing, open environment such servers were a valuable tool, at least for computer scientists, who were the main users of university computers in that era.

Another key piece of open source software was the router, which allowed computers on a network to collaborate in directing network requests and responses between connected machines anywhere on the planet.

Servers had been used for email since the beginnings of the internet and initially it was email, delivered with the help of routers, that brought networked desktop computing into homes and businesses.

When the web was proposed, extending these servers to allow the information from web page servers to be sent to a user’s computer was a small step.

What are you looking at?

The last component is so ubiquitous that we forget what is literally before our eyes: the screen.

The Macintosh Plus had a screen resolution of 512×342 pixels.
Flickr/raneko, CC BY

Affordable computer displays in the 1980s were much too limited to pleasingly render a web page, with resolutions of 640×480 pixels or lower, with crude colours or just black and white. Better screens, starting at 1024×768, first became widely available in the early 1990s.

Only with the appearance of the Mosaic browser in 1993 did the web become appealing, with a pool of about 100 web sites showing how to deliver information in a way that for most users was new and remarkably compelling.

How things have changed.

The online world continues to grow and develop with access today via cable, wireless and mobile handsets. We have internet-connected services in our homes, cars, health services, government, and much more. We live-stream our music and video, and share our lives online.

But the origin of that trend of increasing digitisation of our society lies in those simple beginnings – and the end is not yet in sight.


This article was amended at the request of the author to correct the amount of data accessible from the initial link.

Justin Zobel, Pro Vice-Chancellor, Graduate & International Research, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The behavioural economics of discounting, and why Kogan would profit from discount deception



The consumer watchdog has accused Kogan Australia of misleading customers, by touting discounts on more than 600 items it had previously raised the price of.
http://www.shutterstock.com

Ralph-Christopher Bayer, University of Adelaide

Kogan Australia has grown from a garage to an online retail giant in a little more than a decade. Key to its success have been its discount prices.

But apparently not all of those discounts have been legit, according to the Australian Competition and Consumer Commission.

The consumer watchdog has accused the home electronics and appliances retailer of misleading customers, by touting discounts on more than 600 items whose prices it had sneakily raised by at least the same percentage.

Kogan is yet to have its day in court, so we won’t dwell on its case specifically.

The ACCC alleges Kogan’s ‘TAXTIME’ promotion offered a 10% discount on items whose prices had all been raised by the equivalent percentage.
ACCC

But the scenario does raise an interesting question: how effective is this type of price manipulation? After all, checking and comparing prices is dead easy online. So what could a retailer possibly gain?

Well, as it happens, potentially quite a lot.

Because consumers are human beings, our actions aren't necessarily rational. We have strong emotional reactions to price signals. The sheer ubiquity of discounts demonstrates they must work.

Let's review a couple of findings from behavioural (and traditional) economics that help explain why discounting – both real and fake – is such an effective marketing ploy.

Save! Save! Save!

In standard economics, consumers are assumed to base their purchasing decisions on absolute prices. They make “rational” decisions, and the “framing” of the price does not matter.

Psychologists Daniel Kahneman and Amos Tversky challenged this assumption with their insights into consumer behaviour. Their best-known contribution to behavioural economics is “prospect theory” – a psychologically more realistic alternative to the classical theory of rational choice.

Kahneman and Tversky argued that our evaluations are based on relative changes rather than absolute states. Framing a price as involving a discount therefore influences our perception of its value.

The prospect of buying something leads us to compare two different changes: the positive change in perceived value from taking ownership of a good (the gain); and the negative change experienced from handing over money (the loss). We buy if we perceive the gain to outweigh the loss.

Suppose you are looking to buy a toaster. You see one for $99. Another is $110, with a 10% discount – making it $99. Which one would you choose?

Evaluating the first toaster’s value to you is reasonably straightforward. You will consider the item’s attributes against other toasters and how much you like toast versus some other benefit you might attain for $99.

Standard economics says your emotional response involves weighing the loss of $99 against the gain of owning the toaster.

For the second toaster you might do all the same calculations about features and value for money. But behavioural economics tells us the discount will provoke a more complex emotional reaction than the first toaster.

Research shows most of us will tend to “segregate” the price from the discount; we will feel separately the emotion from the loss of spending $99 and the gain of “saving” $11.

Economist Richard Thaler demonstrated this in a study involving 87 undergraduate students at Cornell University. He quizzed them on a series of scenarios like the following:

Mr A’s car was damaged in a parking lot. He had to spend $200 to repair the damage. The same day the car was damaged, he won $25 in the office football pool.
Mr B’s car was damaged in a parking lot. He had to spend $175 to repair the damage.
Who was more upset?

Just five students said both would be equally upset, while 63 (more than 72%) said Mr B. Similar hypotheticals elicited equally emphatic results.

Economists now refer to this as the “silver lining effect” – segregating a small gain from a larger loss results in greater psychological value than integrating the gain into a smaller loss.
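Thaler's scenario can be sketched with a stylised value function. The square-root curvature below captures diminishing sensitivity only; it is a deliberate simplification (empirical prospect-theory estimates also weight losses more heavily than gains), chosen to keep the arithmetic transparent.

```python
import math

# Stylised prospect-theory value function with diminishing sensitivity:
# concave for gains, convex for losses (loss aversion set aside for clarity).
def v(x: float) -> float:
    return math.sqrt(x) if x >= 0 else -math.sqrt(-x)

# Thaler's scenario: Mr A loses $200 but wins $25; Mr B loses $175.
segregated = v(-200) + v(25)   # feel the loss and the win separately
integrated = v(-175)           # one net loss, nothing to savour

print(round(segregated, 2), round(integrated, 2))
# -> -9.14 -13.23  (the segregated framing feels less painful)
```

Because the loss curve is nearly flat out at $200, shaving $25 off the loss barely registers, while a separate $25 win is felt at the steep part of the gain curve – the silver lining.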

The result is we feel better handing over money for a discounted item than the same amount for a non-discounted item.

Must end soon!

Another behavioural trick associated with discounts is creating a sense of urgency, by emphasising the discount period will end soon.

Again, the fact people typically evaluate prospects as changes from a reference point comes into play.

The seller’s strategy is to shift our reference points so we compare the current price with a higher price in the future. This makes not buying feel like a future loss. Since most humans are loss-averse, we may be nudged to avoid that loss by buying before the discount expires.

Expiry warnings also work through a second behavioural channel: anticipated regret.

Some of us are strongly influenced by whether we think we will come to regret a decision in the future.

Economic psychologist Marcel Zeelenberg and colleagues demonstrated this in experiments with students at the University of Amsterdam. Their conclusion: regret-aversion better explains choices than risk-aversion, because anticipation of regret can promote both risk-averse and risk-seeking choices.

Depending on the extent to which we have this trait, an expiry warning can compel us to buy now, in case we need the item in the future and would regret not having bought it while discounted.

Discounting is thus an effective strategy for getting us to buy products we don't actually need.

Look no further!

But what about the fact that it is so easy to compare prices online? Why doesn’t this fact nullify the two effects we’ve just discussed?

Here the standard economics of consumer search would agree that consumers might be misled despite being perfectly rational.

If a consumer judges a discount promotion is genuine, they have a tendency to assume it is less likely they will find a lower price elsewhere. This belief makes them less likely to continue searching.

In experiments on this topic, my colleague Changxia Ke and I have found a discernible “discount bias”. The effect is not necessarily large, depending on circumstances, but even a small nudge towards choosing a retailer with discounted items over another could end up being worth millions.




Read more:
Why consumers fall for ‘sales’, but companies may be using them too much


Once a consumer has made a decision and bought an item, they are even less likely to search for prices. They therefore may never learn a discount was fake.

There are entire industries where it is general practice to frame prices this way. Paradoxically, because this makes consumers search less for better deals, it allows all sellers to charge higher prices.

The bottom line: beware the emotional appeal of discounts. Whether real or fake, the human tendency is to overrate them.

Ralph-Christopher Bayer, Professor of Economics, University of Adelaide

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Online tools can help people in disasters, but do they represent everyone?



Social media helped some people cope with the Townsville floods earlier this year.
AAP Image/Andrew Rankin

Billy Tusker Haworth, University of Manchester; Christine Eriksen, University of Wollongong, and Scott McKinnon, University of Wollongong

With natural hazard and climate-related disasters on the rise, online tools such as crowdsourced mapping and social media can help people understand and respond to a crisis. They enable people to share their location and contribute information.

But are these tools useful for everyone, or are some people marginalised? It is vital these tools include information provided from all sections of a community at risk.

Current evidence suggests that is not always the case.




Read more:
‘Natural disasters’ and people on the margins – the hidden story


Online tools let people help in disasters

Social media played an important role in coordinating the response to the 2019 Queensland floods and the 2013 Tasmania bushfires. Community members used Facebook to coordinate the sharing of resources such as food and water.

Crowdsourced mapping helped in response to the humanitarian crisis after the 2010 Haiti earthquake. Some of the most useful information came from public contributions.

Twitter provided similar critical insights during Hurricane Irma in South Florida in 2017.

Research shows these public contributions can help in disaster risk reduction, but they also have limitations.

In the rush to develop new disaster mitigation tools, it is important to consider whether they will help or harm the people most vulnerable in a disaster.

Who is vulnerable?

Extreme natural events, such as earthquakes and bushfires, are not considered disasters until vulnerable people are exposed to the hazard.




Read more:
Understanding the root causes of natural disasters


To determine people’s level of vulnerability we need to know:

  1. the level of individual and community exposure to a physical threat
  2. their access to resources that affect their capacity to cope when threats materialise.

Some groups in society will be more vulnerable to disaster than others. This includes people with immobility issues, caring roles, or limited access to resources such as money, information or support networks.

When disaster strikes, the pressure on some groups is often magnified.

The devastating scenes in New Orleans after Hurricane Katrina in 2005 and in Puerto Rico after Hurricane Maria in 2017 revealed the vulnerability of children in such disasters.

Unfortunately, emergency management can exacerbate the vulnerability of marginalised groups. For example, a US study last year showed that in the years after disasters, wealth increased for white people and declined for people of colour. The authors suggest this is linked to inequitable distribution of emergency and redevelopment aid.

Policies and practice have until recently mainly been written by, and for, the dominant groups in our society, especially heterosexual white men.

Research shows how this can create gender inequities or exclude the needs of LGBTIQ communities, former refugees and migrants or domestic violence victims.




Read more:
More men die in bushfires: how gender affects how we plan and respond


We need to ask: do new forms of disaster response help everyone in a community, or do they reproduce existing power imbalances?

Unequal access to digital technologies

Research has assessed the “techno-optimism” – a belief that technologies will solve our problems – associated with people using online tools to share information for disaster management.

These technologies inherently discriminate if access to them discriminates.

In Australia, the digital divide remains largely unchanged in recent years. In 2016-17 nearly 1.3 million households had no internet connection.

Lower digital inclusion is seen in already vulnerable groups, including the unemployed, migrants and the elderly.

Global internet penetration rates show uneven access between economically poorer parts of the world, such as Africa and Asia, and wealthier Western regions.

Representations of communities are skewed on the internet. Particular groups participate with varying degrees on social media and in crowdsourcing activities. For example, some ethnic minorities have poorer internet access than other groups even in the same country.

For crowdsourced mapping on platforms such as OpenStreetMap, studies find participation biases relating to gender. Men map far more than women at local and global scales.

Research shows participation biases in community mapping activities towards older, more affluent men.

Protect the vulnerable

Persecuted minorities, including LGBTIQ communities and religious minorities, are often more vulnerable in disasters. Digital technologies that expose people's identities and fail to protect privacy might increase that vulnerability.

Unequal participation means those who can participate may become further empowered, with more access to information and resources. As a result, gaps between privileged and marginalised people grow wider.

For example, local Kreyòl-speaking Haitians from poorer neighbourhoods contributed information via SMS for use on crowdsourced maps during the 2010 Haiti earthquake response.

But the information was translated and mapped in English for Western humanitarians. As they didn’t speak English, vulnerable Haitians were further marginalised by being unable to directly use and benefit from maps resulting from their own contributions.

Participation patterns in mapping do not reflect the true makeup of our diverse societies. But they do reflect where power lies – usually with dominant groups.

Any power imbalances that come from unequal online participation are pertinent to disaster risk reduction. They can amplify community tensions, social divides and marginalisation, and exacerbate vulnerability and risk.

With greater access to the benefits of online tools, and improved representation of diverse and marginalised people, we can better understand societies and reduce disaster impacts.

We must remain acutely aware of digital divides and participation biases. We must continually consider how these technologies can better include, value and elevate marginalised groups.

Billy Tusker Haworth, Lecturer in GIS and Disaster Management, University of Manchester; Christine Eriksen, Senior Lecturer in Geography and Sustainable Communities, University of Wollongong, and Scott McKinnon, Vice-Chancellor’s Postdoctoral Research Fellow, University of Wollongong

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Blocking Huawei from Australia means slower and delayed 5G – and for what?


Stanley Shanapinda, La Trobe University

The United States and Australia are deliberately restricting the place of Chinese telco Huawei in their telecommunications landscapes.

We’re told these changes will be worth it from a security point of view.




Read more:
What is a mobile network, anyway? This is 5G, boiled down


But Huawei infrastructure is already ubiquitous in telecommunications networks, and we have other avenues available to us if we’re concerned about cybersecurity.

In the end, halting Huawei's involvement in Australia will be felt directly by customers. We will have to be satisfied with below-par 5G internet speeds and delayed service rollouts.

And we probably won’t be able to use Google Play on Huawei smartphones after 2020.

Huawei offers the best 5G

5G is a mobile network technology that promises top speeds, especially in highly populated areas. Australia has been expecting the network to be broadly up and running by around 2020 – there is limited availability in some central business districts right now.

Top 5G speeds can reach up to 10 gigabits per second, 20 times faster than 4G. This means movie downloads in a matter of seconds – as opposed to minutes with 4G. A mobile phone, gaming laptop or smart TV can communicate with a 5G network at a response speed of 1 millisecond, as opposed to 30 milliseconds with 4G.
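The "seconds versus minutes" claim follows directly from the quoted peak rates. The 5 GB movie size is an assumption for illustration; the 4G figure of 0.5 Gbps is implied by "20 times faster" than 10 Gbps.

```python
# Movie download at the quoted peak rates (movie size is an assumption;
# 0.5 Gbps for 4G is implied by "20 times faster").
MOVIE_GB = 5
for label, gbps in [("4G (peak)", 0.5), ("5G (peak)", 10)]:
    seconds = MOVIE_GB * 8 / gbps   # GB -> gigabits, divided by Gbps
    print(f"{label}: {seconds:.0f} s")
# -> 4G (peak): 80 s
# -> 5G (peak): 4 s
```

Real-world speeds fall well short of these peaks, but the ratio between the two technologies is the point.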

Huawei, the world’s biggest manufacturer of telecommunications equipment, is leading the 5G race. The Chinese company is around 12 months ahead of its competitors Nokia and Ericsson.

Huawei has been involved in providing 3G and 4G services in Australia since 2004 – reportedly working with Vodafone and Optus, but not Telstra or NBN Co. Huawei built a private 4G network for mining company Santos, and digital voice and data communication systems for rail services in Western Australia and New South Wales. This includes radio masts, base stations and handheld radios, but not the core network.

But Huawei was restricted from participating in future development of Australia’s and the US’s telecommunications networks from August 2018 and May 2019, respectively.

This stems from apparent Australian and US government concerns that Huawei infrastructure could allow the Chinese government to collect foreign intelligence and sensitive information, and sabotage economic interests.




Read more:
US ban on Huawei likely following Trump cybersecurity crackdown – and Australia is on board


Costs passed on to consumers

Australia’s telecommunications networks have already felt the impact of the Coalition’s Telecommunications Sector Security Reforms announced in August 2018.

These reforms “place obligations on telecommunications companies to protect Australian networks from unauthorised interference or access that might prejudice our national security”.

The guidance effectively put the companies on notice, implying that use of Huawei could violate cybersecurity laws. No company wants to be in such a position. Continuing with Huawei after being informed that the company may pose a national security risk could bring legal and reputational risks.

The result is that companies such as Optus and Vodafone were left scrambling to renegotiate 5G testing and rollout plans that had been in the works since 2016. Optus has already delayed its 5G rollout.

Most operators do use additional manufacturers such as Nokia and Ericsson for networks and testing. But it’s already clear from cases in Europe that such companies have been slow to release equipment that is as advanced as Huawei’s.

Costs incurred by such changes and the delays in rolling out high-quality services are absorbed by mobile phone companies in the first instance, and eventually passed on to the consumer.

Given existing frustrations with the NBN, customers will continue to wait longer and may have to pay more for top 5G services.

Customers who prefer to use Huawei-made phones could be hit with a double whammy. Recent actions by Google to suspend business operations with Huawei could prevent these customers from having access to Google Play (the equivalent of Apple’s app store on Android devices) in the future.

Huawei is already here

It’s no secret that China’s foreign intelligence-gathering over the internet is increasing.

But it’s doubtful Huawei has assisted such efforts. Technical flaws detected in Italy are reported to be normal in the sector and not due to a backdoor.

Germany has decided to introduce a broad regulatory regime that requires suppliers of 5G networks to be trustworthy, and provide assured protection of information under local laws.

A similar approach in Australia would require telecommunications equipment to be tested before installation, and at regular intervals after installation for the lifetime of the network, under a security capability plan the supplier is required to submit.




Read more:
What skills does a cybersecurity professional need?


More broadly speaking, the Coalition has pledged A$156 million to cybersecurity, aimed at developing skills to defend against cyber intrusions and to improve the capabilities of the Australian Cyber Security Centre (ACSC). These plans could reasonably be timed to coincide with the expected launch of 5G at the end of 2020.

Added to this, the 2018 Assistance and Access Act – commonly referred to as the Encryption Bill – already requires all telecommunications manufacturers to protect their networks and assist national security and law enforcement agencies to share information. Huawei is subject to this legal obligation.

If there are security fears about 5G, those same fears would exist in respect of the 4G network that Huawei has installed and supported in this country for more than a decade.

It’s not clear what we gain by blocking Huawei’s involvement in Australia’s 5G network.

Stanley Shanapinda, Research Fellow, La Trobe University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

View from The Hill: Victorian Liberal candidates find social media footprints lethal


Michelle Grattan, University of Canberra

Whether or not it’s some sort of record, the Liberals’ loss of two Victorian candidates in a single day is way beyond what Oscar Wilde would have dubbed carelessness.

Already struggling in that state, the Victorian Liberals managed to select one candidate who, judged on his words, was an appalling Islamophobe and another who was an out-and-out homophobe.

The comments that have brought them down weren’t made in the distant past – they date from last year.

Jeremy Hearn was disendorsed as the party’s candidate for the Labor seat of Isaacs, after it came to light that he had written, among other things, that a Muslim was someone who subscribed to an ideology requiring “killing or enslavement of the citizens of Australia if they do not become Muslim”. This was posted in February 2018.

Peter Killin, who was standing in Wills, withdrew over a comment (in reply to another commenter) he posted in December that included suggesting Liberal MP Tim Wilson should not have been preselected because he’s gay.

Scott Morrison rather quaintly explained the unfortunate choice of Killin by saying “he was a very recent candidate who came in because we weren’t able to continue with the other candidate because of section 44 issues”.

Oops and oops again. First the Victorian Liberals picked someone who didn’t qualify legally, and then they replaced that candidate with one who didn’t qualify under any reasonable test of community standards.




Read more:
Politics with Michelle Grattan: Tim Colebatch on the battle in Victoria – and the Senate


It’s not just the Liberals with problems of candidates with unacceptable views, or bad behaviour.

Labor’s Northern Territory number 2 Senate candidate Wayne Kurnoth, who shared anti-Semitic material on social media, recently stood down. Bill Shorten embarrassed himself on Wednesday by saying he hadn’t met the man, despite having been filmed with him.

Then there’s the case of Luke Creasey, the Labor candidate running in Melbourne (held by the Greens’ Adam Bandt), who shared rape jokes and pornographic material on social media. He has done a mea culpa, saying his actions happened “a number of years ago” and “in no way reflect the views I hold today”. Creasey still has his endorsement. Labor Senate leader Penny Wong has defended him, including by distinguishing between a “mistake” and “prejudice”.

It should be remembered that, given the post-nomination timing, these latest candidates unloaded by their parties have not lost their spots or their party designations on the ballot paper.

As Antony Green wrote when a NSW Liberal candidate had to withdraw during the state election (after a previous association with an online forum that reportedly engaged in unsavoury jokes), “the election goes ahead as if nothing had happened”.

It won’t occur this time, but recall the Pauline Hanson experience. In 1996, the Liberals disendorsed Hanson for racist remarks but she remained on the ballot paper with the party moniker. She was duly elected – and no doubt quite a few voters had thought she was the official Liberal candidate.

What goes around comes around – sort of.

This week Hanson’s number 2 Queensland Senate candidate, Steve Dickson, quit all his party positions after footage emerged of his groping and denigrating language at a Washington strip club. But Dickson is still on the Senate ballot paper.

While the latest major party candidates have been dumped for their views, this election has produced a large number of candidates who clearly appear to be legally ineligible to sit in parliament.

Their presence is despite the fact that, after the horrors of the constitution’s section 44 during the last parliament, candidates now have to provide extensive details for the Australian Electoral Commission about their eligibility.

Although the AEC does not have any role in enforcing eligibility, the availability of this data makes it easier in many cases to spot candidates with legal question marks.

Most of the legally dubious candidates have come from minor parties, and these parties – especially One Nation, Palmer’s United Australia Party and Fraser Anning’s Conservative National Party – are getting close media attention.

When the major parties discovered prospective candidates who would hit a section 44 hurdle – and there have been several – they quickly replaced them.

But the minor parties don’t seem too worried about eligibility. While most of these people wouldn’t have a hope in hell of being elected, on one legal view there is a danger of a High Court challenge if someone was elected on the preferences of an ineligible candidate.

The section 44 problems reinforce the need to properly fix the constitution, as I have argued before. It will be a miracle if section 44 doesn’t cause issues in the next parliament, because in more obscure cases a problem may not be easy to spot.




Read more:
View from The Hill: Section 44 remains a constitutional trip wire that should be addressed


But what of those with beyond-the-pale views?

At one level the fate of the two Victorian Liberal candidates carries the obvious lesson for aspirants: be careful what you post on social media, and delete old posts.

That’s the expedient point. These candidates were caught out by what they put, and left, online.

But there is a deeper issue. Surely vetting of candidates standing for major parties must properly require a very thorough examination of their views and character.

Admittedly sometimes decisions will not be easy – judgements have to be made, including a certain allowance in the case of things said or done a long time before (not applicable with the two in question).

But whether it is the factional nature of the Victorian division to blame for allowing these candidates to get through, or the inattention of the party’s powers-that-be (or likely a combination of both) it’s obvious that something went badly wrong.

That they were in unwinnable seats (despite Isaacs being on a small margin) should be irrelevant. All those who carry the Liberal banner should be espousing values in line with their party, which does after all claim to put “values” at the heart of its philosophy. The same, of course, goes for Labor.

Michelle Grattan, Professorial Fellow, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Goodbye Google+, but what happens when online communities close down?



Google+ is the latest online community to close.
Shutterstock/rvlsoft

Stan Karanasios, RMIT University

This week saw the closure of Google+, an attempt by the online giant to create a social media community to rival Facebook.

If the Australian usage of Google+ is anything to go by – just 45,000 users in March compared to Facebook’s 15 million – it never really caught on.

Google+ is no longer available to users.
Google+/Screengrab

But the Google+ shutdown follows a string of organisations that have disabled or restricted community features such as reviews, user comments and message boards (forums).




Read more:
Sexual subcultures are collateral damage in Tumblr’s ban on adult content


So are we witnessing the decline of online communities and user comments?

Turning off online communities and user-generated content

One of the most well-known message boards – which had existed on the popular movie website IMDb since 2001 – was shut down by owner Amazon in 2017 with just two weeks’ notice to its users.

Nor is this confined to online communities: it mirrors a broader trend among organisations to restrict or turn off their user-generated content. Last year the subscription video-on-demand website Netflix said it would no longer allow users to write reviews. It subsequently deleted all existing user-generated reviews.

Other popular websites have disabled their comments sections, including National Public Radio (NPR), The Atlantic, Popular Science and Reuters.

Why the closures?

Organisations have a range of motivations for taking such actions, from low uptake, running costs and the challenges of managing moderation, to problems with divisive comments, conflict and a lack of community cohesion.

In the case of Google+, low usage alongside data breaches appears to have hastened the decision.

NPR explained its motivation to remove user comments by highlighting how in one month its website NPR.org attracted 33 million unique users and 491,000 comments. But those comments came from just 19,400 commenters; the number of commenters who posted in consecutive months was a fraction of that.
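NPR's quoted figures make the imbalance concrete. A quick calculation, using only the numbers above, shows how small the commenting population was relative to the overall audience:

```python
# Back-of-the-envelope look at NPR's quoted monthly figures.
unique_users = 33_000_000   # unique visitors to NPR.org in one month
comments = 491_000          # comments posted that month
commenters = 19_400         # distinct people who commented

share_commenting = commenters / unique_users * 100   # % of visitors who commented
comments_per_commenter = comments / commenters       # average activity per commenter

print(f"{share_commenting:.2f}% of visitors commented, "
      f"averaging {comments_per_commenter:.0f} comments each")
```

Roughly 0.06% of visitors wrote comments, and each commenter averaged about 25 comments a month – a tiny, highly active minority generating the entire comments section.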

This led NPR’s managing editor for digital news, Scott Montgomery, to say:

We’ve reached the point where we’ve realized that there are other, better ways to achieve the same kind of community discussion around the issues we raise in our journalism.

He said audiences had also moved to engage with NPR more on Facebook and Twitter.

Likewise, The Atlantic explained that its comments sections had become “unhelpful, even destructive, conversations”, and said it was exploring new ways to give users a voice.

In the case of IMDb closing its message boards in 2017, the reason given was:

[…] we have concluded that IMDb’s message boards are no longer providing a positive, useful experience for the vast majority of our more than 250 million monthly users worldwide.

The organisation also nudged users towards other forms of social media, such as its Facebook page and Twitter account @IMDB, as the “[…] primary place they [users] choose to post comments and communicate with IMDb’s editors and one another”.

User backlash

Unsurprisingly, such actions often lead to confusion, criticism and disengagement by user communities, and in some cases petitions to have the features reinstated (such as this one for Google+) and boycotts of the organisations.

But most organisations factor these reactions into their decision-making.

The petition to save IMDb’s message boards.
Change.org/Screengrab

For fans of such community features these trends point to some harsh realities. Even though communities may self-organise and thrive, and users are co-creators of value and content, the functionality and governance are typically beyond their control.

Community members are at the mercy of hosting organisations, some profit-driven, which may have conflicting motivations to those of the users. It’s those organisations that hold the power to change or shut down what can be considered by some to be critical sources of knowledge, engagement and community building.

In the aftermath of shutdowns, my research shows that communities that existed on an organisation’s message boards in particular may struggle to reform.

This can be due to a number of factors, such as high switching costs, and communities can become fragmented because of the range of other options (Reddit, Facebook and other message boards).

So it’s difficult for users to preserve and maintain their communities once their original home is disabled. In the case of Google+, even its Mass Migration Group – which aims to help people, organisations and groups find “new online homes” – may not be enough to hold its online communities together.

The trend towards the closure of online communities by organisations might represent a means to reduce their costs in light of declining usage and the availability of other online options.

It’s also a move away from dealing with the reputational issues related to their use and controlling the conversation that takes place within their user bases. Trolling, conflicts and divisive comments are common in online communities and user comments spaces.

Lost community knowledge

But within online groups there often exists social and network capital, as well as a stock of valuable knowledge created by such community features.




Read more:
Zuckerberg’s ‘new rules’ for the internet must move from words to actions


Often these communities are made of communities of practice (people with a shared passion or concern) on topics ranging from movie theories to parenting.

They are go-to sources for users where meaningful interactions take place and bonds are created. User comments also allow people to engage with important events and debates, and can be cathartic.

Closing these spaces risks not only a loss of user community bases, but also a loss of this valuable community knowledge on a range of issues.

Stan Karanasios, Senior Research Fellow, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

New livestreaming legislation fails to take into account how the internet actually works


The new laws could mean internet service providers end up being forced to surveil the activities of their users.
from www.shutterstock.com

Andre Oboler, La Trobe University

In response to the livestreamed terror attack in New Zealand last month, new laws have just been passed by the Australian Parliament.

These laws amend the Commonwealth Criminal Code, adding two substantive new criminal offences.

Both are aimed not at terrorists but at technology companies. And how that’s done is where some of the new measures fall down.




Read more:
Livestreaming terror is abhorrent – but is more rushed legislation the answer?


The legislation was rushed through with neither consultation nor sufficient discussion.

The laws focus on abhorrent violent material, capturing the terrorist incident in New Zealand, but also online content created by a person carrying out a murder, attempted murder, torture, rape or violent kidnapping.

The laws do not cover material captured by third parties who witness a crime, only content from an attacker, their accomplice, or someone who attempts to join the violence.

The aim is to prevent perpetrators of extreme violence from using the internet to glorify or publicise what they have done. This will reduce terrorists’ ability to spread panic and fear. It will reduce criminals’ ability to intimidate. This is about taking away the tools harmful actors use to damage society.

What the legislation aims to do

Section 474.33 of the Criminal Code makes it a criminal offence for any internet service provider, content service or hosting service to fail to notify the Australian Federal Police, within a reasonable time, once they become aware their service is being used to access abhorrent violent material that occurred or is occurring in Australia. Failing to comply can result in a fine of 800 penalty units (currently $128,952).

Section 474.34 makes it a criminal offence for a content service or hosting service, whether inside or outside Australia, to fail to expeditiously take down material made available through their service and accessible in Australia.

The criminal element of fault is not that the service provider deliberately makes the material available, but rather that they are reckless with regards to identifying such content or providing access to it. Reckless, however, has been given a rather special meaning.

What we’ve got right

There is a clear need for new laws.

Focusing on regulating technology services is the right approach. Back in 2010 when I first raised this idea it was considered radical; today even Mark Zuckerberg supports government regulation.




Read more:
Zuckerberg’s ‘new rules’ for the internet must move from words to actions


We’ve moved away from the idea of technology companies of all types being part of a safe harbour that keeps the internet unregulated. That’s to be welcomed.

Penalties for companies that behave recklessly – failing to build suitable mechanisms to find and remove abhorrent violent material – are also to be welcomed. Such systems should indeed be expanded to cover credible threats of violence and major interference in a country’s sovereignty, such as efforts to manipulate elections or cause mass panics through fake news.

Recklessness as it is ordinarily understood – that is, failing to take the steps a reasonable person in the same position would take – allows the standard to slowly rise as technology and systems for responding to such incidents improve.

Also to be welcomed is the new ability for the eSafety Commissioner to issue a notice to a company identifying an item of abhorrent violent material and to demand its removal. When the government is aware of such content, there must be a way to require rapid action. The law does this.

Where we’ve fallen down

One potential problem with the legislation is the requirement for internet service providers (ISPs) to notify the Australian Federal Police if they are aware their service can be used to access any particular abhorrent violent material.

As ISPs provide access for consumers to everything on the internet, this seeks to turn ISPs into a national surveillance network. It has the potential to move us from an already problematic meta-data retention scheme into an expectation for ISPs to apply deep packet inspection monitoring of everything that is said.




Read more:
Australians accept government surveillance, for now


Content services (including social media platforms such as Facebook, YouTube and Twitter, and regular websites) and hosting services (provided by companies such as Telstra, Microsoft and Amazon through to companies like Servers Australia and Synergy Wholesale) have a more serious problem.

Under the new laws, if content is online at the time a notice is issued by the eSafety Commissioner, the legal presumption will be that the company was behaving recklessly at that time. The notice is not a demand to respond, but rather a finding that the response is already too slow. The relevant section (s 474.35(5)) states (emphasis added) that if a notice has been correctly issued:

…then, in that prosecution, it must be presumed that the person was reckless as to whether the content service could be used to access the specified material at the time the notice was issued

While the presumption can be rebutted, this is still quite different from what the Attorney General’s press release (dated 4 April 2019) claimed:

… the e-Safety Commissioner will have the power to issue notices that bring this type of material to the attention of social media companies. As soon as they receive a notice, they will be deemed to be aware of the material, meaning the clock starts ticking for the platform to remove the material or face extremely serious criminal penalties.

As the law is written, the notice is more of a notification that the clock has already run out of time. It’s like arguing that the occurrence of a terrorist act means “it must be presumed” the government was reckless with regards to prevention. That’s not a fair standard. The idea of the notice starting the clock would in fact be much fairer.

Under this law, a content service provider can be found to have been reckless and to have failed to expeditiously remove content even if no notice was ever issued. In some cases that may be a good thing, but what was passed into law and what the government says it intended don’t appear to match.




Read more:
Why we need to fix encryption laws the tech sector says threaten Australian jobs


Hosting services have the worst of it. They provide the space on servers that allows content to appear on the internet. It’s a little like the arrangement between a landlord and a tenant. With hosting plans starting from around $50 a year, there’s no margin to cover monitoring and complaints management.

The new laws suggest hosting services will be acting recklessly if they don’t monitor their clients so they can take action before the eSafety Commissioner issues a notice. They just aren’t in a position to do that.

A lot still needs to be done

As it stands, only the expeditious removal of content or suspension of a client’s account can avoid the new offence. The legislation does not define what expeditious removal means. There is nothing to suggest the clock would start only after the service provider becomes aware of the content, and the notice from the eSafety Commissioner doesn’t start a clock but says a response is already overdue.

This law is designed to apply pressure on companies so they improve their response times and take preemptive action.

What’s missing, too, is a target with safe harbour protections: a clear standard, and a rule that says companies meeting that standard enjoy immunity from prosecution under this law. That would give companies both a goal and an incentive to reach it.




Read more:
Technology and regulation must work in concert to combat hate speech online


Also missing is a way to measure response times. If we can’t measure it, we can’t push for it to be continually improved.

Rapid removal should be required after a notice from the eSafety Commissioner, perhaps removal within an hour. Fast removal, for example within 24 hours, should be required when reports come from the public.

The exact timelines that are possible should be the subject of consultation with both industry and civil society. They need to be achievable, not merely aspirational.

Working together, government, industry and civil society can create systems to monitor and continually improve efforts to tackle online hate and extremism.

That includes the most serious content such as abhorrent violence and incitement to violent extremism.

Trust, consultation and goodwill are needed to keep people safe.

Andre Oboler, Senior Lecturer, Master of Cyber-Security Program (Law), La Trobe University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Livestreaming terror is abhorrent – but is more rushed legislation the answer?



The perpetrator of the Christchurch attacks livestreamed his killings on Facebook.
Shutterstock

Robert Merkel, Monash University

In the wake of the Christchurch attack, the Australian government has announced its intention to create new criminal offences relating to the livestreaming of violence on social media platforms.

The Criminal Code Amendment (Sharing of Abhorrent Violent Material) Bill will create two new crimes:

It will be a criminal offence for social media platforms not to remove abhorrent violent material expeditiously. This will be punishable by 3 years’ imprisonment or fines that can reach up to 10% of the platform’s annual turnover.

Platforms anywhere in the world must notify the Australian Federal Police if they become aware their service is streaming abhorrent violent conduct that is happening in Australia. A failure to do this will be punishable by fines of up to A$168,000 for an individual or A$840,000 for a corporation.
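The two penalty scales above relate in a simple way, and the turnover-based cap is easy to illustrate. The A$55 billion turnover below is a purely hypothetical figure, chosen only to show how the 10% cap would scale for a large platform:

```python
# Relationship between the penalties quoted above.
# Fine amounts are from the article; the example turnover is hypothetical.
individual_fine = 168_000    # A$, maximum for an individual
corporate_fine = 840_000     # A$, maximum for a corporation (5x the individual cap)

# The removal offence instead caps fines at 10% of annual turnover:
hypothetical_turnover = 55_000_000_000          # A$55bn, illustrative only
max_turnover_fine = 0.10 * hypothetical_turnover

print(f"Corporate/individual ratio: {corporate_fine // individual_fine}x")
print(f"Turnover-based cap on the hypothetical platform: A${max_turnover_fine:,.0f}")
```

For a platform of that hypothetical size, the turnover-based penalty dwarfs the fixed notification fines by several orders of magnitude.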

The government is reportedly seeking to pass the legislation in the current sitting week of Parliament. This could be the last sitting week of the current parliament before an election is called. Labor, or some group of crossbenchers, will need to vote with the government if the legislation is to pass. But the draft bill was only made available to the Labor Party last night.

This is not the first time that legislation relating to the intersection of technology and law enforcement has been raced through parliament, to the consternation of parts of the technology industry and other groups. Ongoing concerns around the Assistance and Access Act demonstrate the risks of such rushed legislation.




Read more:
China bans streaming video as it struggles to keep up with live content


Major social networks already moderate violence

The government has defined “abhorrent violent material” as:

[…] material produced by a perpetrator, and which plays or livestreams the very worst types of offences. It will capture the playing or streaming of terrorism, murder, attempted murder, torture, rape and kidnapping on social media.

The major social media platforms already devote considerable resources to content moderation. They are often criticised for their moderation policies, and the inconsistent application of those policies. But content fitting the government’s definition is already clearly prohibited by Twitter, Facebook, and Snapchat.

Social media companies rely on a combination of technology, and thousands of people employed as content moderators to remove graphic content. Moderators (usually contractors, often on low wages) are routinely called on to remove a torrent of abhorrent material, including footage of murders and other violent crimes.




Read more:
We need to talk about the mental health of content moderators


Technology is helpful, but not a solution

Technologies developed to assist with content moderation are less advanced than one might hope – particularly for videos. Facebook’s own moderation tools are mostly proprietary. But we can get an idea of the commercial state of the art from Microsoft’s Content Moderator API.

The Content Moderator API is an online service designed to be integrated by programmers into consumer-facing communication systems. Microsoft’s tools can automatically recognise “racy or adult content”. They can also identify images similar to ones in a list. This kind of technology is used by Facebook, in cooperation with the office of the eSafety Commissioner, to help track and block image-based abuse – commonly but erroneously described as “revenge porn”.

The Content Moderator API cannot automatically classify an image, let alone a video, as “abhorrent violent content”. Nor can it automatically identify videos similar to another video.

Technology that could match videos is under development. For example, Microsoft is currently trialling a matching system specifically for video-based child exploitation material.

As well as developing new technologies themselves, the tech giants are enthusiastic adopters of methods and ideas devised by academic researchers. But they are some distance from being able to automatically identify re-uploads of videos that violate their terms of service, particularly when uploaders modify the video to evade moderators. The ability to automatically flag these videos as they are uploaded or streamed is even more challenging.
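The "images similar to ones in a list" matching described above is commonly built on perceptual hashing: an image is reduced to a short fingerprint that changes little under brightening, resizing or re-encoding, so near-duplicates of known banned images can be flagged by comparing fingerprints. This is a minimal sketch of the general technique (an average hash over an 8×8 grid), not the actual implementation used by Microsoft or Facebook:

```python
# Minimal average-hash sketch: reduce an "image" (an 8x8 grid of grayscale
# values) to a 64-bit fingerprint, then compare fingerprints by Hamming
# distance. Production systems are far more robust; this only illustrates
# the idea of matching uploads against a list of known images.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255). Returns a 64-bit int."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)  # 1 where brighter than mean
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints (0 = identical)."""
    return bin(a ^ b).count("1")

# A tiny synthetic "image" and a slightly brightened copy of it.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
modified = [[min(255, p + 10) for p in row] for row in original]

known_hashes = [average_hash(original)]  # the "list" of known banned images
distance = min(hamming(average_hash(modified), h) for h in known_hashes)
print(f"Hamming distance to nearest known image: {distance}")
```

Because the hash compares each pixel against the image's own mean brightness, uniformly brightening the copy barely changes the fingerprint – which is why such matching survives simple re-encodes, but can still be defeated by the more aggressive modifications uploaders use to evade moderators.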

Important questions, few answers so far

Evaluating the government’s proposed legislative amendments is difficult given that details are scant. I’m a technologist, not a legal academic, but the scope and application of the legislation is currently unclear. Before any legislation is passed, a number of questions need to be addressed – too many to list here, but for instance:

Does the requirement to remove “abhorrent violent material” apply only to material created or uploaded by Australians? Does it only apply to events occurring within Australia? Or could foreign social media companies be liable for massive fines if videos created in a foreign country, and uploaded by a foreigner, were viewed within Australia?

Would attempts to render such material inaccessible from within Australia suffice (even though workarounds are easy)? Or would removal from access anywhere in the world be required? Would Australians be comfortable with a foreign law that required Australian websites to delete content displayed to Australians based on the decisions of a foreign government?




Read more:
Anxieties over livestreams can help us design better Facebook and YouTube content moderation


Complex legislation needs time

The proposed legislation does nothing to address the broader issues surrounding promotion of the violent white supremacist ideology that apparently motivated the Christchurch attacker. While that does not necessarily mean it’s a bad idea, it would seem very far from a full governmental response to the monstrous crime an Australian citizen allegedly committed.

It may well be that the scope and definitional issues are dealt with appropriately in the text of the legislation. But considering the government seems set on passing the bill in the next few days, it’s unlikely lawmakers will have the time to carefully consider the complexities involved.

While the desire to prevent further circulation of perpetrator-generated footage of terrorist attacks is noble, taking effective action is not straightforward. Yet again, the federal government’s inclination seems to be to legislate first and discuss later.

Robert Merkel, Lecturer in Software Engineering, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.