Apart from the obvious health and economic impacts, the coronavirus also presents a major opportunity for cybercriminals.
As staff across sectors and university students shift to working and studying from home, large organisations are at increased risk of being targeted. With defences down, companies should go the extra mile to protect their business networks and employees at such a precarious time.
Reports suggest hackers are already exploiting remote workers, luring them into online scams masquerading as important information related to the pandemic.
On Friday, the Australian Competition and Consumer Commission’s Scamwatch reported that since January 1 it had received 94 reports of coronavirus-related scams, and this figure could rise.
As COVID-19 causes a spike in telework, telehealth and online education, cybercriminals have fewer hurdles to jump in gaining access to networks.
The National Broadband Network’s infrastructure has afforded many Australians access to higher-speed internet, compared with DSL connections. Unfortunately this also gives cybercriminals high-speed access to Australian homes, letting them rapidly extract personal and financial details from victims.
The shift to working from home means many people are using home computers, instead of more secure corporate-supplied devices. This provides criminals relatively easy access to corporate documents, trade secrets and financial information.
Instead of attacking a corporation’s network, which would likely be secured with advanced cybersecurity countermeasures and tracking, they now simply have to locate and attack the employee’s home network. This means less chance of discovery.
Cryptolocker-based attacks are advanced cyberattacks that can bypass many traditional countermeasures, including antivirus software, because they are designed and built by sophisticated cybercriminals.
Most infections from a cryptolocker virus happen when people open unknown attachments, sent in malicious emails.
In some cases, the attack can be traced to nation state actors. One example is the infamous WannaCry cyberattack, which deployed malware (software designed to cause harm) that encrypted computers in more than 150 countries. The hackers, supposedly from North Korea, demanded cryptocurrency in exchange for unlocking them.
If an employee working from home accidentally activates cryptolocker malware while browsing the internet or reading an email, this could first take out the home network, then spread to the corporate network, and to other attached home networks.
This can happen if their device is connected to the workplace network via a Virtual Private Network (VPN). This makes the home device an extension of the corporate network, and the virus can bypass any advanced barriers the corporate network may have.
If devices are attached to a network that has been infected and not completely cleaned, the contaminant can rapidly spread again and again. In fact, a single device that isn't cleaned properly can cause millions of dollars in damage. This happened during the Petya (2016) and NotPetya (2017) malware attacks.
On the bright side, there are some steps organisations and employees can take to protect their digital assets from opportunistic criminal activity.
Encryption is a key weapon in this fight. This security method protects files and network communications by methodically “scrambling” the contents using an algorithm. The receiving party is given a key to unscramble, or “decrypt”, the information.
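As a toy illustration of "scrambling" with a key, the sketch below XORs each byte of a message with a keystream derived from a passphrase. This is purely illustrative and not a real cipher (real encryption relies on vetted algorithms such as AES); all names and values here are invented for the example.

```python
import hashlib
from itertools import cycle

def toy_scramble(data: bytes, key: bytes) -> bytes:
    """Toy XOR 'scrambling': each byte is combined with a keystream byte.
    Applying the same operation with the same key reverses it."""
    keystream = hashlib.sha256(key).digest()  # derive a fixed keystream from the key
    return bytes(b ^ k for b, k in zip(data, cycle(keystream)))

message = b"quarterly financials"
key = b"correct horse battery staple"

ciphertext = toy_scramble(message, key)   # scrambled: unreadable without the key
recovered = toy_scramble(ciphertext, key) # the same key reverses the scrambling

assert recovered == message
assert ciphertext != message
```

The round trip works because XOR is its own inverse; a repeating 32-byte keystream like this one would be trivially breakable, which is exactly why production systems use proper ciphers instead.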
Enabling encryption on a Windows or Apple device is also simple. And don't forget to back up your encryption keys onto a USB drive when prompted, and store them in a safe place such as a locked cabinet, or off site.
A VPN should be used at all times when connected to WiFi, even at home. This tool helps mask your online activity and location, by routing outgoing and incoming data through a secure “virtual tunnel” between your computer and the VPN server.
Existing WiFi access protocols (WEP, WPA, WPA2) all have known weaknesses and shouldn't be relied on alone to transmit sensitive data. Without a VPN, cybercriminals can more easily intercept and retrieve data.
It’s also important that businesses and organisations encourage remote employees to use the best anti-malware and antivirus protections on their home systems, even if this comes at the organisation’s expense.
People often backup their files on a home computer, personal phone or tablet. There is significant risk in doing this with corporate documents and sensitive digital files.
When working from home, sensitive material can be stored in a location unknown to the organisation. This could be a cloud location (such as iCloud, Google Cloud, or Dropbox), or via backup software the user owns or uses. Files stored in these locations may not be protected under Australian laws.
Businesses choosing to save files on the cloud, on an external hard drive or on a home computer need to identify backup regimes that fit the risk profile of their business. Essentially, if you don’t allow files to be saved on a computer’s hard drive at work, and use the cloud exclusively, the same level of protection should apply when working from home.
Appropriate backups must be observed by all remote workers, along with standard cybersecurity measures such as firewall, encryption, VPN and antivirus software. Only then can we rely on some level of protection at a time when cybercriminals are desperate to profit.
This week NBN Co announced pricing changes for the National Broadband Network.
It includes a new plan boasting a download speed of 1 gigabit per second and an upload speed of 50 megabits per second for $80 a month.
These are 20-fold improvements on the maximum NBN speeds now. Almost a decade since the first customers were connected, NBN Co is thinking about a genuinely 21st century offering in terms of speed and price.
The NBN is late, over budget and slow. Australia places 58th globally for fixed-line broadband speed. Not only do the NBN’s advertised speeds lag international standards but the actual speeds often don’t come close to what is promised.
Customer interest as a result has been unenthusiastic. NBN Co may well need to take a massive write-down on its assets because they don’t look like they’re worth A$50 billion.
All of this was entirely predictable, based on politicians failing to remember three basic lessons from Economics 101.
The history of innovation is littered with examples of remarkably important things being invented with no clear purpose in mind, or by accident, and then exceeding our wildest expectations.
Penicillin and vulcanised rubber (which led to the tyre for automobiles) were both invented by accident. The world wide web was developed as a means of communication among particle physicists. Most of us carry around in our pocket a computer (the mobile phone) roughly as powerful as the world’s fastest supercomputer circa 1985. These have turned out to be pretty useful.
When the Coalition decided to scuttle Labor’s NBN plan for fibre-optic cable to every premises, on the basis that “fibre-to-the-node” and using existing copper telephone wires to the premises would be much cheaper, this is what the chief spruiker of the Coalition’s NBN plan, Malcolm Turnbull, said about broadband needs in 2010:
There isn’t much or anything you can do with 100 Mbps that you can’t do with 12 Mbps for residential customers.
The breathtaking lack of insight and imagination in this comment is responsible in no small part for the Flintstonian broadband infrastructure Australia now has.
Prioritising speed of roll-out (which hasn’t even happened) over speed of internet (which sure has happened) was a massive mistake.
You having fast internet is good for me when we connect. When consumers can connect quickly to a business’s website that’s good for the business. It makes it more profitable for businesses to invest in their internet operations. This has benefits for other consumers and even other businesses.
A great illustration of this is in Dunedin, New Zealand, where there have been all sorts of business-to-business spillovers from the city having the fastest internet speeds in Australasia. The ABC’s Four Corners program has highlighted how this has revolutionised New Zealand’s video-game development industry, among other things.
Economists call spillover effects to third parties externalities. Pollution is a negative externality, while the benefit of fast internet is a positive externality.
A sound business model for the NBN ought to recognise the positive externalities and ensure they are incorporated into the price mechanism, by offering a partial subsidy to encourage people to sign up. Like the reverse of a carbon price.
One of the NBN’s key problems is the way successive governments structured national investment in it. Setting up NBN Co as a quasi-corporate entity needing to make a commercial rate of return on the roughly A$50 billion investment in the network was a huge mistake. It was the opposite of providing a subsidy.
The telecommunications companies who retail the NBN have complained that NBN Co’s wholesale price points mean it is hard for resellers to make a profit. It’s a kind of quality death spiral: an unattractive product means fewer people buy it, leading to the product getting worse, leading to even fewer people buying it.
Finally, it’s never a good idea to charge everyone the same price when there are different costs to serve different people.
The idea was that higher returns from easy-to-service city homes would subsidise the higher costs of servicing homes in regional and remote areas. But city homes, precisely because they are cheaper to service, have other options. If not enough city customers signed up to the NBN, prices would be driven up, making the network even less attractive to city customers. It’s textbook adverse selection, just like in health-insurance markets.
The government tried to get around this by banning competition. But that’s never really possible, especially from technologies not yet invented. Like 5G. The 2010 business case assumed no more than 16% of households would go wireless. Oops.
As economic journalist Peter Martin wrote in 2011:
NBN will never make a return on the cost of its capital or meet its customer targets if it faces competition. Its corporate plan says so, at point 1: “The plan assumes effective regulatory protection to prevent opportunistic cherry picking […] the viability of the project is dependent upon this protection.”
Multiple governments have bungled the NBN. But there is a way to salvage things – a bit.
Holding constant the technology (fibre-to-the-node), the best thing the government could do is write down its investment massively – ideally so low that it can flog NBN Co off to someone who can be subject to access regulation – ensuring, like other utilities, ownership of infrastructure doesn’t stymie competition – and make a modest rate of return.
Our super funds are always sticking up their hands for infrastructure investment. This would be a good one.
Ideally, though, the technology should be fixed. Fibre-to-the-premises was always going to be expensive, but it was also going to be fast, and as future-proof as we could get.
Lack of imagination and inability to think past 12 Mbps less than ten years ago should not hold the nation back now.
This article is part of our occasional long read series Zoom Out, where authors explore key ideas in science and technology in the broader context of society and humanity.
When Australia joined the global internet on June 23, 1989 – via a connection made by the University of Melbourne – it was mostly used by computer scientists.
Three decades later, more than 86% of Australian households are connected to the internet.
But it was a slow start. At first, network capacity was limited to very small volumes of information.
This all changed thanks to the development of vastly more powerful computers, and other technologies that have transformed our online experience.
One of those technologies is probably in front of you now: the screen.
Look at how you view the web, email and apps today: not just on large desktop screens but also handheld devices, and perhaps even an internet-connected wristwatch.
This was barely imaginable 30 years ago.
Australia too had networks during the 1980s, but distance and a lack of interest from commercial providers meant these were isolated from the rest of the world.
This first international link provided just 56 kilobits per second of national connectivity. A 20th of a megabit per second for the whole country! That is not even enough to stream a single piece of music (encoded at 128 kbps), and it would take about a week for a movie to be transferred to Australia.
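A quick back-of-envelope check of those figures, assuming a roughly 4 GB movie file (the file size is my assumption, not from the article):

```python
LINK_BPS = 56_000    # the 1989 link: 56 kilobits per second
MUSIC_BPS = 128_000  # streaming bitrate cited in the article (128 kbps)

# Streaming a single song needs more bandwidth than the whole link had.
can_stream = LINK_BPS >= MUSIC_BPS  # False

# Assumed movie size: 4 gigabytes = 4 * 8 * 10^9 bits.
movie_bits = 4 * 8 * 10**9
transfer_days = movie_bits / LINK_BPS / 86_400  # seconds per day

print(f"{transfer_days:.1f} days")  # roughly 6.6 days, i.e. about a week
```

So the "about a week" claim holds up for a movie of that size.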
But at that time digital music, video and images were not distributed online. Nor was the internet servicing a large community. Most of the users were academics or researchers in computer science or physics.
With continuous connection came live access. The most immediate impact was that email could now be delivered immediately.
At first, email and internet news groups (discussion forums) were the main traffic, but the connection also gave access to information sharing services such as Archie (an old example here) and WAIS, which were mostly used to share software.
There was connection too, in principle at least, to the newly created world wide web, which in June 1989 was just three months old and largely unknown. It wouldn’t become significant for another four years or so.
This turning-on of a connection was not a “light in a darkened room” moment, in which we suddenly had access to the resources that are now so familiar to us.
But it was a crucial step, one of several developments maturing in parallel that created the technology that has so drastically transformed our society, commerce and daily lives. Within just a few years we were surfing the web and sending email from home.
The first of these developments was the internet itself, which was and is a cobbling-together of disparate networks around the globe.
Australia had several networks, ranging from the relatively open ACSNET (now called AARNET) created by computer science departments to connect universities to, at the other extreme, proprietary, secure networks operated by defence and industry.
When Melbourne opened that first link, it provided a bridge from ACSNET to the networks in the United States and from there to the rest of the world.
Just as important were developments in the underlying technology. At the time, the capacity of the networks was adequate – just. As the community of users rapidly grew, it sometimes seemed as though the internet might utterly break down.
By the mid-1990s bandwidth (the volume of digital traffic that a network can carry) increased to an extent that earlier had seemed unimaginable. This provided the data transmission infrastructure the web would come to demand.
Another development was computing hardware. Computers were doubling in speed every 18 months, as had been predicted. They also became much cheaper.
Computer disks were also growing in capacity, doubling in size every year or so. The yet-to-appear web would require disk space for storage of web pages, and compute capacity for running servers, which are applications that provide a door into a computer, giving users remote access to data and software.
In the 1980s these had been scarce, expensive resources that would have been overwhelmed by even small volumes of web traffic. By the early 1990s growth in capacity could – just – accommodate the demand that suddenly appeared and homes were being connected, via dial-up at first.
But it is a third concurrent development that is, to me, the most remarkable.
This is the emergence of the UNIX operating system and of a community of people who collaboratively wrote UNIX-based code for free (yes, for no charge). Their work provided what is arguably the core of the systems that underpin the modern world.
At that time, operating systems (like iOS on today’s Apple phones) were limited to a single type of computer. Code and programs could not be used across machines from different manufacturers.
UNIX, in contrast, could be used on any suitable machine. This is the reason UNIX variants continue to provide the core of Apple Mac computers, Android phones, systems such as inflight entertainment and smart TVs, and many billions of other devices.
Along with UNIX came a culture of collaborative code development by programmers. This was initially via sharing of programs sent on tape between institutions as parcels in the mail. Anyone with time to spare could create programs and share them with a community of like-minded users.
This became known as the open source movement. Many thousands of people helped develop software of a diversity and richness that was beyond the resources of any single organisation. And it was not driven by commercial or corporate needs.
Programs could embody speculative innovations, and any developer who was frustrated by errors or shortcomings in the tools they used could update or correct them.
A key piece of open source software was the server, a computer system in a network shared by multiple users. Providing anonymous users with remote access was far from desirable for commercial computers of the era, on which use of costly computing time was tightly controlled.
But in an academic, sharing, open environment such servers were a valuable tool, at least for computer scientists, who were the main users of university computers in that era.
Another key piece of open source software was the router, which allowed computers on a network to collaborate in directing network requests and responses between connected machines anywhere on the planet.
Servers had been used for email since the beginnings of the internet and initially it was email, delivered with the help of routers, that brought networked desktop computing into homes and businesses.
When the web was proposed, extending these servers to allow the information from web page servers to be sent to a user’s computer was a small step.
The last component is so ubiquitous that we forget what is literally before our eyes: the screen.
Affordable computer displays in the 1980s were much too limited to pleasingly render a web page: resolutions of 640×480 pixels or lower, with crude colours or just black and white. Better screens, starting at 1024×768, first became widely available in the early 1990s.
Only with the appearance of the Mosaic browser in 1993 did the web become appealing, with a pool of about 100 web sites showing how to deliver information in a way that for most users was new and remarkably compelling.
The online world continues to grow and develop with access today via cable, wireless and mobile handsets. We have internet-connected services in our homes, cars, health services, government, and much more. We live-stream our music and video, and share our lives online.
But the origin of that trend of increasing digitisation of our society lies in those simple beginnings – and the end is not yet in sight.
This article was amended at the request of the author to correct the amount of data accessible from the initial link.
Kogan Australia has grown from a garage to an online retail giant in a little more than a decade. Key to its success have been its discount prices.
But apparently not all of those discounts have been legit, according to the Australian Competition and Consumer Commission.
The consumer watchdog has accused the home electronics and appliances retailer of misleading customers, by touting discounts on more than 600 items whose prices it had sneakily raised by at least the same percentage.
Kogan is yet to have its day in court, so we won’t dwell on its case specifically.
But the scenario does raise an interesting question. How effective are these types of price manipulation? After all, checking and comparing prices is dead easy online. So what could a retailer possibly gain?
Well, as it happens, potentially quite a lot.
Because consumers are human beings, our actions aren’t necessarily rational. We have strong emotional reactions to price signals. The sheer ubiquity of discounts demonstrates they must work.
Let’s review a couple of findings from behavioural (and traditional) economics that help explain why discounting – both real and fake – is such an effective marketing ploy.
In standard economics, consumers are assumed to base their purchasing decisions on absolute prices. They make “rational” decisions, and the “framing” of the price does not matter.
Psychologists Daniel Kahneman and Amos Tversky challenged this assumption with their insights into consumer behaviour. Their best-known contribution to behavioural economics is “prospect theory” – a psychologically more realistic alternative to the classical theory of rational choice.
Kahneman and Tversky argued that behaviour is based on changes relative to a reference point. Framing a price as involving a discount therefore influences our perception of its value.
The prospect of buying something leads us to compare two different changes: the positive change in perceived value from taking ownership of a good (the gain); and the negative change experienced from handing over money (the loss). We buy if we perceive the gain to outweigh the loss.
Suppose you are looking to buy a toaster. You see one for $99. Another is $110, with a 10% discount – making it $99. Which one would you choose?
Evaluating the first toaster’s value to you is reasonably straightforward. You will consider the item’s attributes against other toasters and how much you like toast versus some other benefit you might attain for $99.
Standard economics says your emotional response involves weighing the loss of $99 against the gain of owning the toaster.
For the second toaster you might do all the same calculations about features and value for money. But behavioural economics tells us the discount will provoke a more complex emotional reaction than the first toaster.
Research shows most of us will tend to “segregate” the price from the discount; we will feel separately the emotion from the loss of spending $99 and the gain of “saving” $11.
Economist Richard Thaler demonstrated this in a study involving 87 undergraduate students at Cornell University. He quizzed them on a series of scenarios like the following:
Mr A’s car was damaged in a parking lot. He had to spend $200 to repair the damage. The same day the car was damaged, he won $25 in the office football pool.
Mr B’s car was damaged in a parking lot. He had to spend $175 to repair the damage.
Who was more upset?
Just five students said both would be equally upset, while 63 (more than 72%) said Mr B. Similar hypotheticals elicited equally emphatic results.
Economists now refer to this as the “silver lining effect” – segregating a small gain from a larger loss results in greater psychological value than integrating the gain into a smaller loss.
The result is we feel better handing over money for a discounted item than the same amount for a non-discounted item.
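The segregation effect can be sketched numerically with the Kahneman–Tversky value function and their commonly cited parameter estimates (α = 0.88, λ = 2.25). The toaster numbers come from the article; treating the discounted purchase as "loss of $99 plus a separate $11 gain" follows the segregation account above, and the specific parameter values are assumptions for illustration only.

```python
def value(x, alpha=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave for gains,
    and steeper for losses (loss aversion, scaled by lam)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

# Toaster at a flat $99: one integrated loss.
integrated = value(-99)

# Toaster at $110 with a 10% discount: the same $99 leaves the wallet,
# but the buyer segregates the $11 "saving" as a separate small gain.
segregated = value(-99) + value(11)

assert segregated > integrated  # the discounted frame feels better
```

Because any positive gain adds value when felt separately, the discounted frame wins even though the money handed over is identical.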
Another behavioural trick associated with discounts is creating a sense of urgency, by emphasising the discount period will end soon.
Again, the fact people typically evaluate prospects as changes from a reference point comes into play.
The seller’s strategy is to shift our reference points so we compare the current price with a higher price in the future. This makes not buying feel like a future loss. Since most humans are loss-averse, we may be nudged to avoid that loss by buying before the discount expires.
Expiry warnings also work through a second behavioural channel: anticipated regret.
Some of us are greatly influenced to behave according to whether we think we will regret it in the future.
Economic psychologist Marcel Zeelenberg and colleagues demonstrated this in experiments with students at the University of Amsterdam. Their conclusion: regret-aversion better explains choices than risk-aversion, because anticipation of regret can promote both risk-averse and risk-seeking choices.
Depending to what extent we have this trait, an expiry warning can compel us to buy now, in case we need that item in the future and will regret not having taken the opportunity to buy it when discounted.
Discounting is thus an effective strategy to get us to buy products we actually don’t need.
But what about the fact that it is so easy to compare prices online? Why doesn’t this fact nullify the two effects we’ve just discussed?
Here the standard economics of consumer search would agree that consumers might be misled despite being perfectly rational.
If a consumer judges a discount promotion is genuine, they have a tendency to assume it is less likely they will find a lower price elsewhere. This belief makes them less likely to continue searching.
In experiments on this topic, my colleague Changxia Ke and I have found a discernible “discount bias”. The effect is not necessarily large, depending on circumstances, but even a small nudge towards choosing a retailer with discounted items over another could end up being worth millions.
Once a consumer has made a decision and bought an item, they are even less likely to keep checking prices. They may therefore never learn a discount was fake.
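The search mechanism can be illustrated with a hypothetical simulation. Every number here is invented: prices are drawn uniformly between $90 and $120, and I simply assume a believed-genuine discount halves the number of stores a consumer bothers to check (four down to two).

```python
import random

def best_price_found(n_stores, rng):
    """Lowest price seen after checking n stores, prices drawn U($90, $120)."""
    return min(rng.uniform(90, 120) for _ in range(n_stores))

rng = random.Random(42)
TRIALS = 10_000

# Assumption: seeing a "discount" makes further search feel pointless,
# so the shopper checks 2 stores instead of 4.
paid_no_discount = sum(best_price_found(4, rng) for _ in range(TRIALS)) / TRIALS
paid_with_discount = sum(best_price_found(2, rng) for _ in range(TRIALS)) / TRIALS

# Searching less means settling for a higher expected price.
assert paid_with_discount > paid_no_discount
```

Under these assumptions the average price paid rises by several dollars per purchase, which is the sense in which even a small discount bias can be "worth millions" across a customer base.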
There are entire industries where it is general practice to frame prices this way. Paradoxically, because this makes consumers search less for better deals, it allows all sellers to charge higher prices.
The bottom line: beware the emotional appeal of the discount. Whether real or fake, the human tendency is to overrate them.
With natural hazard and climate-related disasters on the rise, online tools such as crowdsourced mapping and social media can help people understand and respond to a crisis. They enable people to share their location and contribute information.
But are these tools useful for everyone, or are some people marginalised? It is vital these tools include information provided from all sections of a community at risk.
Current evidence suggests that is not always the case.
Social media played an important role in coordinating response to the 2019 Queensland floods and the 2013 Tasmania bushfires. Community members used Facebook to coordinate sharing of resources such as food and water.
Crowdsourced mapping helped in response to the humanitarian crisis after the 2010 Haiti earthquake. Some of the most useful information came from public contributions.
Twitter provided similar critical insights during Hurricane Irma in South Florida in 2017.
In the rush to develop new disaster mitigation tools, it is important to consider whether they will help or harm the people most vulnerable in a disaster.
Extreme natural events, such as earthquakes and bushfires, are not considered disasters until vulnerable people are exposed to the hazard.
Some groups in society will be more vulnerable to disaster than others. These include people with immobility issues, caring roles, or limited access to resources such as money, information or support networks.
When disaster strikes, the pressure on some groups is often magnified.
The devastating scenes in New Orleans after Hurricane Katrina in 2005 and in Puerto Rico after Hurricane Maria in 2017 revealed the vulnerability of children in such disasters.
Unfortunately, emergency management can exacerbate the vulnerability of marginalised groups. For example, a US study last year showed that in the years after disasters, wealth increased for white people and declined for people of colour. The authors suggest this is linked to inequitable distribution of emergency and redevelopment aid.
We need to ask: do new forms of disaster response help everyone in a community, or do they reproduce existing power imbalances?
These technologies inherently discriminate if access to them discriminates.
Lower digital inclusion is seen in already vulnerable groups, including the unemployed, migrants and the elderly.
Global internet penetration rates show uneven access between economically poorer parts of the world, such as Africa and Asia, and wealthier Western regions.
Representations of communities are skewed on the internet. Particular groups participate with varying degrees on social media and in crowdsourcing activities. For example, some ethnic minorities have poorer internet access than other groups even in the same country.
Research shows participation biases in community mapping activities towards older, more affluent men.
Persecuted minorities, including LGBTIQ communities and religious minorities, are often more vulnerable in disasters. Digital technologies that expose people’s identities and fail to protect privacy might increase that vulnerability.
Unequal participation means those who can participate may become further empowered, with more access to information and resources. As a result, gaps between privileged and marginalised people grow wider.
For example, local Kreyòl-speaking Haitians from poorer neighbourhoods contributed information via SMS for use on crowdsourced maps during the 2010 Haiti earthquake response.
But the information was translated and mapped in English for Western humanitarians. Because these contributors didn’t speak English, vulnerable Haitians were further marginalised, unable to directly use and benefit from the maps resulting from their own contributions.
Any power imbalances that come from unequal online participation are pertinent to disaster risk reduction. They can amplify community tensions, social divides and marginalisation, and exacerbate vulnerability and risk.
With greater access to the benefits of online tools, and improved representation of diverse and marginalised people, we can better understand societies and reduce disaster impacts.
We must remain acutely aware of digital divides and participation biases. We must continually consider how these technologies can better include, value and elevate marginalised groups.
Billy Tusker Haworth, Lecturer in GIS and Disaster Management, University of Manchester; Christine Eriksen, Senior Lecturer in Geography and Sustainable Communities, University of Wollongong, and Scott McKinnon, Vice-Chancellor’s Postdoctoral Research Fellow, University of Wollongong
The United States and Australia are deliberately restricting the place of Chinese telco Huawei in their telecommunications landscapes.
We’re told these changes will be worth it from a security point of view.
But Huawei infrastructure is already ubiquitous in telecommunications networks, and we have other avenues available to us if we’re concerned about cybersecurity.
In the end, halting involvement of Huawei in Australia will be felt directly by customers. We will have to be satisfied with below-par 5G internet speeds and delayed service rollouts.
And we probably won’t be able to use Google Play on Huawei smart phones after 2020.
5G is a mobile phone network that promises top speeds, especially in highly populated areas. Australia has been expecting the network to be broadly up and running by around 2020 – there is limited availability in some central business districts right now.
Top 5G speeds can reach up to 10 gigabits per second, 20 times faster than 4G. This means movie downloads in a matter of seconds – as opposed to minutes with 4G. A mobile phone, gaming laptop or smart TV can communicate with a 5G network at a response speed of 1 millisecond, as opposed to 30 milliseconds with 4G.
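The "seconds versus minutes" claim checks out with the cited peak speeds, assuming a roughly 5 GB movie file (the file size is my assumption):

```python
BPS_5G = 10 * 10**9   # cited 5G peak: 10 gigabits per second
BPS_4G = BPS_5G / 20  # article: 5G is 20x faster, so 4G peaks at 500 Mbps

movie_bits = 5 * 8 * 10**9  # assumed 5 GB movie file

secs_5g = movie_bits / BPS_5G  # download time at 5G peak
secs_4g = movie_bits / BPS_4G  # download time at 4G peak

# A few seconds on 5G, over a minute on 4G.
assert secs_5g < 10
assert secs_4g > 60
```

Real-world speeds fall well short of these theoretical peaks, but the relative comparison is what matters here.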
Huawei, the world’s biggest manufacturer of telecommunications equipment, is leading the 5G race. The Chinese company is around 12 months ahead of its competitors Nokia and Ericsson.
Huawei has been involved in providing 3G and 4G services in Australia since 2004 – reportedly working with Vodafone and Optus, but not Telstra or NBN Co. Huawei built a private 4G network for mining company Santos, and digital voice and data communication systems for rail services in Western Australia and New South Wales. This includes radio masts, base stations and handheld radios, but not the core network.
The restrictions stem from apparent Australian and US government concerns that Huawei infrastructure could allow the Chinese government to collect foreign intelligence and sensitive information, and sabotage economic interests.
Australia’s telecommunications networks have already felt the impact of the Coalition’s Telecommunications Sector Security Reforms announced in August 2018.
These reforms “place obligations on telecommunications companies to protect Australian networks from unauthorised interference or access that might prejudice our national security”.
The guidance effectively put the companies on notice, implying that use of Huawei could violate cybersecurity laws. No company wants to be in such a position. Continuing with Huawei after being informed that the company may pose a national security risk could bring legal and reputational risks.
As a result, companies such as Optus and Vodafone were left scrambling to renegotiate 5G testing and rollout plans that had been in the works since 2016. Optus has already delayed its 5G rollout.
Most operators do use additional manufacturers such as Nokia and Ericsson for networks and testing. But it’s already clear from cases in Europe that such companies have been slow to release equipment that is as advanced as Huawei’s.
Costs incurred by such changes and the delays in rolling out high-quality services are absorbed by mobile phone companies in the first instance, and eventually passed on to the consumer.
Given existing frustrations with the NBN, customers will continue to wait longer and may have to pay more for top 5G services.
Customers who prefer to use Huawei-made phones could be hit with a double whammy. Recent actions by Google to suspend business operations with Huawei could prevent these customers from having access to Google Play (the equivalent of Apple’s app store on Android devices) in the future.
But it's doubtful Huawei has assisted any such espionage efforts. Technical flaws detected in its equipment in Italy are reported to be normal for the sector, not evidence of a backdoor.
Germany has decided to introduce a broad regulatory regime that requires suppliers of 5G networks to be trustworthy, and provide assured protection of information under local laws.
A similar approach in Australia would require telecommunications equipment to be tested before installation, and at regular intervals after installation for the lifetime of the network, under a security capability plan the supplier is required to submit.
More broadly speaking, the Coalition has pledged A$156 million to cybersecurity, aimed at developing skills to defend against cyber intrusions and to improve the capabilities of the Australian Cyber Security Centre (ACSC). These plans could reasonably be timed with the expected launch of 5G at the end of 2020.
Added to this, the 2018 Assistance and Access Act – commonly referred to as the Encryption Bill – already requires all telecommunications manufacturers to protect their networks and assist national security and law enforcement agencies to share information. Huawei is subject to this legal obligation.
If there are security fears about 5G, the same fears should apply to the 4G network Huawei has installed and supported in this country for more than a decade.
It’s not clear what we gain by blocking Huawei’s involvement in Australia’s 5G network.
Whether or not it’s some sort of record, the Liberals’ loss of two Victorian candidates in a single day is way beyond what Oscar Wilde would have dubbed carelessness.
Already struggling in that state, the Victorian Liberals managed to select one candidate who, judged on his words, was an appalling Islamophobe and another who was an out-and-out homophobe.
The comments that have brought them down weren’t made in the distant past – they date from last year.
Jeremy Hearn was disendorsed as the party’s candidate for the Labor seat of Isaacs, after it came to light that he had written, among other things, that a Muslim was someone who subscribed to an ideology requiring “killing or enslavement of the citizens of Australia if they do not become Muslim”. This was posted in February 2018.
Peter Killin, who was standing in Wills, withdrew over a comment (in reply to another commenter) he posted in December that included suggesting Liberal MP Tim Wilson should not have been preselected because he’s gay.
Scott Morrison rather quaintly explained the unfortunate choice of Killin by saying “he was a very recent candidate who came in because we weren’t able to continue with the other candidate because of section 44 issues”.
Oops and oops again. First the Victorian Liberals picked someone who didn't qualify legally, then they replaced that candidate with one who didn't qualify under any reasonable test of community standards.
It's not just the Liberals who have problems with candidates holding unacceptable views, or behaving badly.
Labor’s Northern Territory number 2 Senate candidate Wayne Kurnoth, who shared anti-Semitic material on social media, recently stood down. Bill Shorten embarrassed himself on Wednesday by saying he hadn’t met the man, despite having been filmed with him.
Then there's the case of Luke Creasey, the Labor candidate running in Melbourne (held by the Greens' Adam Bandt), who shared rape jokes and pornographic material on social media. He has done a mea culpa, saying his actions happened "a number of years ago" and "in no way reflect the views I hold today". Creasey still has his endorsement. Labor Senate leader Penny Wong has defended him, including by distinguishing between a "mistake" and "prejudice".
It should be remembered that, given the post-nomination timing, these latest candidates unloaded by their parties have not lost their spots or their party designations on the ballot paper.
As Antony Green wrote when a NSW Liberal candidate had to withdraw during the state election (after a previous association with an online forum which reportedly engaged in unsavoury jokes): "the election goes ahead as if nothing had happened".
It won’t occur this time, but recall the Pauline Hanson experience. In 1996, the Liberals disendorsed Hanson for racist remarks but she remained on the ballot paper with the party moniker. She was duly elected – and no doubt quite a few voters had thought she was the official Liberal candidate.
What goes around comes around – sort of.
This week Hanson’s number 2 Queensland Senate candidate, Steve Dickson, quit all his party positions after footage emerged of his groping and denigrating language at a Washington strip club. But Dickson is still on the Senate ballot paper.
While the latest major party candidates have been dumped for their views, this election has produced a large number of candidates who clearly appear to be legally ineligible to sit in parliament.
Their presence is despite the fact that, after the horrors of the constitution’s section 44 during the last parliament, candidates now have to provide extensive details for the Australian Electoral Commission about their eligibility.
Although the AEC does not have any role of enforcing eligibility, the availability of this data makes it easier in many cases to spot candidates who have legal question marks.
Most of the legally dubious candidates have come from minor parties, and these parties (especially One Nation, Palmer's United Australia Party and Fraser Anning's Conservative National Party) are getting close media attention.
When the major parties discovered prospective candidates who would hit a section 44 hurdle – and there have been several – they quickly replaced them.
But the minor parties don’t seem too worried about eligibility. While most of these people wouldn’t have a hope in hell of being elected, on one legal view there is a danger of a High Court challenge if someone was elected on the preferences of an ineligible candidate.
The section 44 problems reinforce the need to properly fix the constitution, as I have argued before. It will be a miracle if it doesn’t cause issues in the next parliament, because in more obscure cases a problem may not be easy to spot.
But what of those with beyond-the-pale views?
At one level the fate of the two Victorian Liberal candidates carries the obvious lesson for aspirants: be careful what you post on social media, and delete old posts.
That’s the expedient point. These candidates were caught out by what they put, and left, online.
But there is a deeper issue. Surely vetting of candidates standing for major parties must properly require a very thorough examination of their views and character.
Admittedly sometimes decisions will not be easy – judgements have to be made, including a certain allowance in the case of things said or done a long time before (not applicable with the two in question).
But whether it is the factional nature of the Victorian division that is to blame for allowing these candidates to get through, or the inattention of the party's powers-that-be (or, likely, a combination of both), it's obvious that something went badly wrong.
That they were in unwinnable seats (despite Isaacs being on a small margin) should be irrelevant. All those who carry the Liberal banner should be espousing values in line with their party, which does after all claim to put “values” at the heart of its philosophy. The same, of course, goes for Labor.
This week saw the closure of Google+, an attempt by the online giant to create a social media community to rival Facebook.
If the Australian usage of Google+ is anything to go by – just 45,000 users in March compared to Facebook’s 15 million – it never really caught on.
But the Google+ shutdown follows a string of organisations that have disabled or restricted community features such as reviews, user comments and message boards (forums).
So are we witnessing the decline of online communities and user comments?
One of the most well-known message boards – which had existed on the popular movie website IMDb since 2001 – was shut down by owner Amazon in 2017 with just two weeks' notice for its users.
This is not confined to online communities; it mirrors a broader trend among organisations to restrict or turn off their user-generated content. Last year the subscription video-on-demand website Netflix said it no longer allowed users to write reviews. It subsequently deleted all existing user-generated reviews.
Organisations have a range of motivations for taking such actions, ranging from low uptake, running costs, the challenges of managing moderation, as well as the problem around divisive comments, conflicts and lack of community cohesion.
NPR explained its motivation for removing user comments by highlighting how, in one month, its website NPR.org attracted 33 million unique users but only 491,000 comments. Those comments came from just 19,400 commenters, and the number of commenters who posted in consecutive months was a fraction of that.
We’ve reached the point where we’ve realized that there are other, better ways to achieve the same kind of community discussion around the issues we raise in our journalism.
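The concentration NPR described can be sketched from its own figures: what share of visitors actually commented, and how active each commenter was on average.

```python
# Back-of-envelope look at the NPR figures quoted above: how
# concentrated was commenting among the site's monthly visitors?
unique_users = 33_000_000  # monthly unique users of NPR.org
comments = 491_000         # comments posted that month
commenters = 19_400        # distinct people who posted them

share_commenting = commenters / unique_users * 100  # % of users who commented
comments_each = comments / commenters               # average comments per commenter

print(f"{share_commenting:.2f}% of users commented, "
      f"about {comments_each:.0f} comments each on average")
```

Under a tenth of a percent of visitors produced all of the comments, each posting dozens on average, which helps explain why NPR judged the feature unrepresentative of its audience.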
Likewise, The Atlantic explained that its comments sections had become “unhelpful, even destructive, conversations” and was exploring new ways to give users a voice.
In the case of IMDB closing its message boards in 2017, the reason given was:
[…] we have concluded that IMDb’s message boards are no longer providing a positive, useful experience for the vast majority of our more than 250 million monthly users worldwide.
The organisation also nudged users towards other forms of social media, such as its Facebook page and Twitter account @IMDB, as the “(…) primary place they (users) choose to post comments and communicate with IMDb’s editors and one another”.
Unsurprisingly, such actions often lead to confusion, criticism and disengagement by user communities, and in some cases petitions to have the features reinstated (such as this one for Google+) and boycotts of the organisations.
But most organisations factor these reactions into their decision-making.
For fans of such community features these trends point to some harsh realities. Even though communities may self-organise and thrive, and users are co-creators of value and content, the functionality and governance are typically beyond their control.
Community members are at the mercy of hosting organisations, some profit-driven, whose motivations may conflict with those of their users. It's those organisations that hold the power to change or shut down what some consider critical sources of knowledge, engagement and community building.
In the aftermath of shutdowns, my research shows that communities that existed on an organisation’s message boards in particular may struggle to reform.
This can be due to a number of factors, such as high switching costs, and communities can become fragmented because of the range of other options (Reddit, Facebook and other message boards).
So it’s difficult for users to preserve and maintain their communities once their original home is disabled. In the case of Google+, even its Mass Migration Group – which aims to help people, organisations and groups find “new online homes” – may not be enough to hold its online communities together.
The trend towards the closure of online communities by organisations might represent a means to reduce their costs in light of declining usage and the availability of other online options.
It’s also a move away from dealing with the reputational issues related to their use and controlling the conversation that takes place within their user bases. Trolling, conflicts and divisive comments are common in online communities and user comments spaces.
But within online groups there often exists social and network capital, as well as the stock of valuable knowledge that such community features create.
Often these communities are made of communities of practice (people with a shared passion or concern) on topics ranging from movie theories to parenting.
They are go-to sources for users where meaningful interactions take place and bonds are created. User comments also allow people to engage with important events and debates, and can be cathartic.
Closing these spaces risks not only a loss of user community bases, but also a loss of this valuable community knowledge on a range of issues.