Healthcare, minerals, energy, food: how adopting new tech could drive Australia’s economic recovery




Katherine Wynn, CSIRO; James Deverell, CSIRO; Max Temminghoff, CSIRO, and Mingji Liu, CSIRO

Over the next few years, science and technology will have a vital role in supporting Australia’s economy as it strives to recover from the coronavirus pandemic.

At Australia’s national science agency, CSIRO, we’ve identified opportunities that can help businesses drive economic recovery.

We examined how the pandemic has created or intensified opportunities for economic growth across six sectors benefiting from science and technology. These are food and agribusiness, energy, health, mineral resources, digital and manufacturing.

Advanced healthcare

While some aspects of Australian healthcare are currently digitised, system-wide digital health integration could improve the quality of care and save money.

Doctors caring for patients with chronic diseases or complex conditions could digitally coordinate care routines. This could streamline patient care by avoiding consultation double-ups and providing a more holistic view of patient health.

We also see potential for more efficient healthcare delivery through medical diagnostic tests that are more portable and non-invasive. Such tests, supported by artificial intelligence and smart data storage approaches, would allow faster disease detection and monitoring.

There’s also opportunity for developing specialised components such as 3D-printed prosthetics, dental and bone implants.

Green energy

Despite a short-term plateau in global energy consumption caused by COVID-19, demand for energy will continue to grow.

Through clean energy exports and energy initiatives aligned with decarbonisation goals, Australia can help meet global energy demands. Energy-efficient technologies offer immediate reduced energy costs, reduced carbon emissions and less demand on the energy grid. They also create local jobs.




Innovating with food and agribusiness

The food and agribusiness sector is a prominent contributor to Australia’s economy and supports regional and rural prosperity.

Global population growth is driving an increased demand for protein. At the same time, consumers want more products that are sustainable and ethically sourced.

Australia could earn revenue from the local production and export of more sustainable proteins. This might include plant-based proteins such as peas and lupins, or aquaculture products such as farmed prawns and seaweed.

We could also offer more high-value health and well-being foods. Examples include fortified foods and products free from gluten, lactose and other allergens.

Automating minerals processes

Even before COVID-19 struck, the mineral resources sector was facing rising costs and declining ore grades. It’s also dealing with climate change impacts such as droughts, bushfires, floods, and social pressures to reduce environmental harm.

Several innovative solutions could help make the sector more productive and sustainable. For instance, increasing automation and remote mining (areas in which Australia already excels) could improve worker safety, productivity and business continuity.




Also, investing in advanced technologies that can generate higher quality data on mineral character and composition could improve yields and minimise environmental harm.

High-tech manufacturing

COVID-19 has escalated concerns around Australia’s supply chain fragility – take the toilet paper shortages earlier in the pandemic. Expanding local manufacturing efforts could create jobs and increase Australia’s earning potential.

This is especially true for mineral processing and manufacturing, pharmaceuticals, food and beverages, space technology and defence. Our local manufacturing will need to adapt quickly to changes in supply needs, ideally through the use of advanced designs and technology.

Digital solutions

In April and May this year, Australian businesses made huge strides in adopting consumer and business digital technologies. One study estimated five years’ worth of progress occurred in those eight weeks. Hundreds of thousands of businesses moved their work online.

Over the next two years, Australian businesses could become more efficient and adaptable by further monetising the data they already collect. For example, applying mobile sensors, robotics and machine learning techniques could help us make better resource decisions in agriculture.

Similarly, businesses could share more data throughout the supply chain, including with customers and competitors. For instance, increased data sharing among renewable energy providers and customers could improve the monitoring, forecasting and reliability of energy supply.

Making the right plans and investments now will determine Australia’s recovery and resilience in the future.

Katherine Wynn, Lead Economist, CSIRO Futures, CSIRO; James Deverell, Director, CSIRO Futures, CSIRO; Max Temminghoff, Senior Consultant, CSIRO, and Mingji Liu, Senior Economic Consultant, CSIRO

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Can I still be hacked with 2FA enabled?




David Tuffley, Griffith University

Cybersecurity is like a game of whack-a-mole. As soon as the good guys put a stop to one type of attack, another pops up.

Usernames and passwords were once good enough to keep an account secure. But before long, cybercriminals figured out how to get around this.

Often they’ll use “brute force attacks”, bombarding a user’s account with various password and login combinations in a bid to guess the correct one.
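The scale of the problem is easy to see in a toy sketch (the target string and alphabet below are invented for illustration; real attacks run against login endpoints or stolen password hashes). A brute-force attack simply enumerates candidates until one matches, and every extra character multiplies the search space – which is exactly what rate limits and 2FA are designed to counter.

```python
import itertools
import string

# Hypothetical toy example: guessing a 3-character lowercase password.
target = "cab"
alphabet = string.ascii_lowercase  # 26 letters -> 26**3 = 17,576 combinations

attempts = 0
for guess in itertools.product(alphabet, repeat=len(target)):
    attempts += 1
    if "".join(guess) == target:
        break

print(attempts)  # 1354: "cab" is the 1,354th combination in lexicographic order
```

A real eight-character password drawn from letters, digits and symbols has trillions of combinations, which is why attackers prefer tricking the account holder instead.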

To deal with such attacks, a second layer of security was added in an approach known as two-factor authentication, or 2FA. It’s widespread now, but does 2FA also leave room for loopholes cybercriminals can exploit?

2FA via text message

There are various types of 2FA. The most common method is to be sent a single-use code as an SMS message to your phone, which you then enter following a prompt from the website or service you’re trying to access.

Most of us are familiar with this method as it’s favoured by major social media platforms. However, while it may seem safe enough, it isn’t necessarily.

Hackers have been known to trick mobile phone carriers (such as Telstra or Optus) into transferring a victim’s phone number to their own phone.




Pretending to be the intended victim, the hacker contacts the carrier with a story about losing their phone, requesting a new SIM with the victim’s number to be sent to them. Any authentication code sent to that number then goes directly to the hacker, granting them access to the victim’s accounts.

This method is called SIM swapping. It’s probably the easiest of several types of scams that can circumvent 2FA.

And while carriers’ verification processes for SIM requests are improving, a competent trickster can talk their way around them.

Authenticator apps

The authenticator method is more secure than 2FA via text message. It works on a principle known as TOTP, or “time-based one-time password”.

TOTP is more secure than SMS because a code is generated on your device rather than being sent across the network, where it might be intercepted.
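A minimal sketch of how TOTP works under the hood, using only the Python standard library (this illustrates RFC 6238 generically, not the internals of any particular authenticator app): the current time is divided into 30-second steps, and each step is HMAC-hashed with a secret shared between your device and the service to produce a short code.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, period=30):
    """Time-based one-time password (RFC 6238) using HMAC-SHA1."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // period
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Base32 encoding of the RFC 6238 test key "12345678901234567890".
secret = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(secret, at=59))  # 287082, matching the published SHA-1 test vector
```

Because your phone and the server each compute the code independently from the shared secret and the clock, nothing secret crosses the network at login time.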

The authenticator method uses apps such as Google Authenticator, LastPass, 1Password, Microsoft Authenticator, Authy and Yubico.

However, while it’s safer than 2FA via SMS, there have been reports of hackers stealing authentication codes from Android smartphones. They do this by tricking the user into installing malware (software designed to cause harm) that copies and sends the codes to the hacker.

Android smartphones are generally easier targets for this kind of malware than iPhones. Apple’s iOS installs apps only from its vetted App Store, while Android also allows installation from third-party sources, giving malware more routes onto a device.

2FA using details unique to you

Biometric methods are another form of 2FA. These include fingerprint login, face recognition, retinal or iris scans, and voice recognition. Biometric identification is becoming popular for its ease of use.

Most smartphones today can be unlocked by placing a finger on the scanner or letting the camera scan your face – much quicker than entering a password or passcode.

However, biometric data can be hacked too, either from the servers where it is stored or from the software that processes it.

One case in point is last year’s BioStar 2 data breach, in which nearly 28 million biometric records were exposed. BioStar 2 is a security system that uses facial recognition and fingerprinting technology to help organisations secure access to buildings.

There can also be false negatives and false positives in biometric recognition. Dirt on the fingerprint reader or on the person’s finger can lead to false negatives. Also, faces can sometimes be similar enough to fool facial recognition systems.

Another type of 2FA comes in the form of personal security questions such as “what city did your parents meet in?” or “what was your first pet’s name?”




Only the most determined and resourceful hacker will be able to find answers to these questions. It’s unlikely, but still possible, especially as more of us adopt public online profiles.

Often when we share our lives on the internet, we fail to consider what kinds of people may be watching.

2FA remains best practice

Despite all of the above, the biggest vulnerability to being hacked is still the human factor. Successful hackers have a bewildering array of psychological tricks in their arsenal.

A cyber attack could come as a polite request, a scary warning, a message ostensibly from a friend or colleague, or an intriguing “clickbait” link in an email.

The best way to protect yourself from hackers is to develop a healthy amount of scepticism. If you carefully check websites and links before clicking through and also use 2FA, the chances of being hacked become vanishingly small.

The bottom line is that 2FA is effective at keeping your accounts safe. However, try to avoid the less secure SMS method when given the option.

Just as burglars in the real world focus on houses with poor security, hackers on the internet look for weaknesses.

And while any security measure can be overcome with enough effort, a hacker won’t make that investment unless they stand to gain something of greater value.

David Tuffley, Senior Lecturer in Applied Ethics & CyberSecurity, Griffith University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Keep calm, but don’t just carry on: how to deal with China’s mass surveillance of thousands of Australians




Bruce Baer Arnold, University of Canberra

National security is like sausage-making. We might enjoy the tasty product, but want to look away from the manufacturing.

Recent news that Chinese company Zhenhua Data is profiling more than 35,000 Australians isn’t a surprise to people with an interest in privacy, security and social networks. We need to think critically about this, knowing we can do something to prevent it from happening again.

Reports indicate Zhenhua provides services to the Chinese government. It may also provide services to businesses in China and overseas.

The company operates under Chinese law and doesn’t appear to have a presence in Australia. That means we can’t shut it down or penalise it for a breach of our law. Also, Beijing is unlikely to respond to expressions of outrage from Australia or condemnation by our government – especially amid recent sabre-rattling.




Zhenhua is reported to have data on more than 35,000 Australians – a list dominated by political leaders and prominent figures. Names, birthdays, addresses, marital status, photographs, political associations, relatives and social media account details are among the information extracted.

It seems Zhenhua has data on a wide range of Australians, including public figures such as Victorian supreme court judge Anthony Cavanough, Australia’s former ambassador to China Geoff Raby, former NSW premier and federal foreign affairs minister Bob Carr, tech billionaire Mike Cannon-Brookes and singer Natalie Imbruglia.

It’s not clear how individuals are being targeted. The profiling might be systematic. It might instead be conducted on the basis of a specific industry, academic discipline, public prominence or perceived political influence.

It’s unlikely Zhenhua profiles random members of the public. That means there’s no reason for average citizens without a China connection to be worried.

Still, details around the intelligence gathering elude us, so the best practice for the public is to maintain as much online privacy as possible.

Overall, we don’t know much about Zhenhua’s goals. And what we do know came from a leak to a US academic who sensibly fled China in 2018, fearing for his safety.

Pervasive surveillance is the norm

Pervasive surveillance is now a standard feature of all major governments, which often rely on surveillance-for-profit companies. Governments in the West buy services from big data analytic companies such as Palantir.

Australia’s government gathers information outside our borders, too. Take the bugging of the Timor-Leste government, a supposed friend rather than enemy.

How sophisticated is the plot?

Revelations about Zhenhua have referred to the use of artificial intelligence and the “mosaic” method of intelligence gathering. But this is probably less exciting than it sounds.

Reports indicate much of the data was extracted from online open sources. Access to much of this would have simply involved using algorithms to aggregate targets’ names, dates, qualifications and work history data found on publicly available sites.

The algorithms then help put the individual pieces of the “mosaic” together and fill in the holes on the basis of each individual’s relationships with others, such as peers, colleagues or partners.
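What such aggregation might look like in practice can be sketched in a few lines. The names, fields and fragments below are entirely invented; the point is only that merging scraped records keyed on a person’s name, then inferring a missing field from a relationship, requires nothing more exotic than a dictionary.

```python
from collections import defaultdict

# Invented fragments, as might be scraped from different public pages.
fragments = [
    {"name": "Jane Cho", "employer": "Example Corp"},
    {"name": "Jane Cho", "degree": "LLB, Example University"},
    {"name": "Jane Cho", "colleague": "Tom Reid"},
    {"name": "Tom Reid", "employer": "Example Corp"},
]

# Assemble the mosaic: each fragment fills in another tile of a profile.
profiles = defaultdict(dict)
for fragment in fragments:
    profiles[fragment["name"]].update(fragment)

# Fill a hole from a relationship: look up a named colleague's profile.
jane = profiles["Jane Cho"]
if "colleague" in jane:
    jane["colleague_employer"] = profiles[jane["colleague"]].get("employer")

print(jane["colleague_employer"])  # Example Corp
```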

Some of the data for the mosaic may come from hacking or be gathered directly by the profiler. According to the ABC, some data that landed in Zhenhua’s lap was taken from the dark web.

One seller might have spent years copying data from university networks. For example, last year the Australian National University acknowledged major personal data breaches had taken place, potentially extending back 19 years.

This year there was also the unauthorised (and avoidable) access by cybercriminals to NSW government data on 200,000 people.

While it may be confronting to know a foreign state is compiling information on Australian citizens, it should be comforting to learn sharing this information can be avoided – if you’re careful.

What’s going on in the black box?

One big question is what Zhenhua’s customers in China’s political and business spheres might do with the data they’ve compiled on Australian citizens. Frankly, we don’t know. National security is often a black box and we are unlikely ever to get verifiable details.

Apart from distaste at being profiled, we might say being watched is no big deal, especially given many of those on the list are already public figures. Simply having an AI-assisted “Who’s Who” of prominent Australians isn’t necessarily frightening.

However, it is of concern if the information collected is being used for disinformation, such as through any means intended to erode trust in political processes, or subvert elections.

For instance, a report published in June by the Australian Strategic Policy Institute detailed how Chinese-speaking people in Australia were being targeted by a “persistent, large-scale influence campaign linked to Chinese state actors”.

In June, Prime Minister Scott Morrison announced China was supposedly behind a major state-based attack against several of Australia’s sectors, including all levels of government.

Deep fake videos are another form of subversion of increasing concern to governments and academics, particularly in the US.




Can we fix this?

We can’t make Zhenhua and its competitors disappear. Governments think they are too useful.

Making everything visible to state surveillance is now the ambition of many law enforcement bodies and all intelligence agencies. It’s akin to Google and its competitors wanting to know (and sell) everything about us, without regard for privacy as a human right.

We can, however, build resilience.

One way is to require government agencies and businesses to safeguard their databases. That hasn’t been the case with the NSW government, Commonwealth governments, Facebook, dating services and major hospitals.

In Australia, we need to adopt recommendations by law reform inquiries and establish a national right to privacy. The associated privacy tort would incentivise data custodians and also encourage the public to avoid oversharing online.

In doing so, we might be better placed to condemn both China and other nations participating in unethical intelligence gathering, while properly acknowledging our own wrongdoings in Timor-Leste.

Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The Thousand Talents Plan is part of China’s long quest to become the global scientific leader


James Jin Kang, Edith Cowan University

The Thousand Talents Plan is a Chinese government program to attract scientists and engineers from overseas. Since the plan began in 2008, it has recruited thousands of researchers from countries including the United States, the United Kingdom, Germany, Singapore, Canada, Japan, France and Australia.

While many countries try to lure top international research talent, the US, Canada and others have raised concerns that the Thousand Talents Plan may facilitate espionage and theft of intellectual property.

Why are we hearing about it now?

China has long been suspected of engaging in hacking and intellectual property theft. In the early 2000s, Chinese hackers were involved in the downfall of the Canadian telecommunications corporation Nortel, which some have linked to the rise of Huawei.

These efforts have attracted greater scrutiny as Western powers grow concerned about China’s increasing global influence and foreign policy projects such as the Belt and Road Initiative.

Last year, a US Senate committee declared the plan a threat to American interests. Earlier this year, Harvard nanotechnology expert Charles Lieber was arrested for lying about his links to the program.

In Australia, foreign policy think tank the Australian Strategic Policy Institute recently published a detailed report on Australian involvement in the plan. After media coverage of the plan, the parliamentary joint committee on intelligence and security is set to launch an inquiry into foreign interference in universities.




What is the Thousand Talents Plan?

The Chinese Communist Party (CCP) developed the Thousand Talents Plan to lure top scientific talent, with the goal of making China the world’s leader in science and technology by 2050. The CCP uses the plan to acquire technologies, expertise and, arguably, intellectual property from overseas, sometimes through illegal or non-transparent means, building its power by leveraging those technologies at minimal cost.

According to a US Senate committee report, the Thousand Talents Plan is one of more than 200 CCP talent recruitment programs. These programs drew in almost 60,000 professionals between 2008 and 2016.

China’s technology development and intellectual property portfolio has skyrocketed since the launch of the plan in 2008. Last year China overtook the US for the first time in filing the most international patents.

What are the issues?

The plan offers scientists funding and support to commercialise their research, and in return the Chinese government gains access to their technologies.

In 2019, a US Senate committee declared the plan a threat to American interests. It claimed one participating researcher stole information about US military jet engines, and more broadly that China uses American research and expertise for its own economic and military gain.




Dozens of Australian and US employees of universities and government are believed to have participated in the plan without having declared their involvement. In May, ASIO issued all Australian universities a warning about Chinese government recruitment activities.

On top of intellectual property issues, there are serious human rights concerns. Technologies transferred to China under the program have been used in the oppression of Uyghurs in Xinjiang and in society-wide facial recognition and other forms of surveillance.

A global network

The Chinese government has established more than 600 recruitment stations globally. This includes 146 in the US, 57 each in Germany and Australia, and more than 40 each in the UK, Canada, Japan and France.

Recruitment agencies contracted by the CCP are paid A$30,000 annually plus incentives for each successful recruitment.

The agencies deal with individual researchers rather than institutions, as individuals are easier to monitor. Participants do not have to leave their current jobs to be involved in the plan.




This can raise conflicts of interest. In the US alone, 54 scientists have lost their jobs for failing to disclose this external funding, and more than 20 have been charged over espionage and fraud allegations.

In Australia, our education sector relies significantly on the export of education to Chinese students. Chinese nationals may be employed in various sectors including research institutions.

These nationals are targets for Thousand Talents Plan recruitment agents. Our government may not know what’s going on unless participants disclose information about their external employment or grants funded by the plan.

The case of Koala AI

Heng Tao Shen was recruited by the Thousand Talents Plan in 2014 while a professor at the University of Queensland. He became head of the School of Computer Science and Engineering at the University of Electronic Science and Technology of China and founded a company called Koala AI.

Members of Koala AI’s research team reportedly now include Thousand Talents Plan scholars at the University of NSW, University of Melbourne and the National University of Singapore. The plan allows participants to stay at their overseas base as long as they work in China for a few months of the year.

The company’s surveillance technology was used by authorities in Xinjiang, raising human rights concerns. Shen, who relocated to China in 2017 but remained an honorary professor at the University of Queensland until September 2019, reportedly failed to disclose this information to the university, in breach of its policy.

What should be done?

Most participants in the plan are not engaged in anything illegal and have not breached the rules of their governments or institutions. With greater transparency and stricter adherence to the rules of foreign states and institutions, the plan could benefit both China and other nations.

Governments, universities and research institutions, and security agencies all have a role to play here.

Governments can build partnerships with other parties to monitor the CCP’s talent recruitment activities and increase transparency around funding in universities. Security agencies can investigate illegal behaviour related to the recruitment activity. Research institutions can strengthen integrity by requiring grant recipients to disclose any participation in talent recruitment plans.




More resources should be invested towards compliance and enforcement in foreign funding processes, so that researchers understand involvement in the Thousand Talents Plan may carry national security risks.

Following US government scrutiny in 2018, Chinese government websites deleted online references to the plan and some Chinese universities stopped promoting it. The plan’s website also removed the names of participating scientists.

This shows a joint effort can influence the CCP and their recruitment stations to be more cautious in approaching candidates, and reduce the impact of this plan on local and domestic affairs.

Correction: This article has been updated to reflect the fact that Heng Tao Shen ceased to be an honorary professor at University of Queensland in September 2019.

James Jin Kang, Lecturer, Computing and Security, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How misinformation about 5G is spreading within our government institutions – and who’s responsible



Aris Oikonomou/EPA

Michael Jensen, University of Canberra

“Fake news” is not just a problem of misleading or false claims on fringe websites, it is increasingly filtering into the mainstream and has the potential to be deeply destructive.

My recent analysis of more than 500 public submissions to a parliamentary committee on the launch of 5G in Australia shows just how pervasive misinformation campaigns have become at the highest levels of government. A significant number of the submissions peddled inaccurate claims about the health effects of 5G.

These falsehoods were prominent enough that the committee felt compelled to address the issue in its final report. The report noted:

community confidence in 5G has been shaken by extensive misinformation preying on the fears of the public spread via the internet, and presented as facts, particularly through social media.

This is a remarkable situation for Australian public policy – it is not common for a parliamentary inquiry to have to rebut the dodgy scientific claims it receives in the form of public submissions.

While many Australians might dismiss these claims as fringe conspiracy theories, the reach of this misinformation matters. If enough people act on the basis of these claims, it can cause harm to the wider public.

In late May, for example, protests against 5G, vaccines and COVID-19 restrictions were held in Sydney, Melbourne and Brisbane. Some protesters claimed 5G was causing COVID-19 and the pandemic was a hoax – a “plandemic” – perpetuated to enslave and subjugate the people to the state.




Misinformation can also lead to violence. Last year, the FBI for the first time identified conspiracy theory-driven extremists as a terrorism threat.

Conspiracy theories that 5G causes autism, cancer and COVID-19 have also led to widespread arson attacks in the UK and Canada, along with verbal and physical attacks on employees of telecommunication companies.

The source of conspiracy messaging

To better understand the nature and origins of the misinformation campaigns against 5G in Australia, I examined the 530 submissions posted online to the parliament’s standing committee on communications and the arts.

The majority of submissions were from private citizens. A sizeable number, however, made claims about the health effects of 5G, parroting language from well-known conspiracy theory websites.

A perceived lack of “consent” about the planned 5G roll-out featured prominently in these submissions. One person argued she did not agree to allow 5G to be “delivered directly into” the home and “radiate” her family.




To connect sentiments like this to conspiracy groups, I looked at two well-known conspiracy sites that have been identified as promoting narratives consistent with Russian misinformation operations – the Centre for Research on Globalization (CRG) and Zero Hedge.

CRG is an organisation founded and directed by Michel Chossudovsky, a former professor at the University of Ottawa and opinion writer for Russia Today.

CRG has been flagged by NATO intelligence as part of wider efforts to undermine trust in “government and public institutions” in North America and Europe.

Zero Hedge, which is registered in Bulgaria, attracts millions of readers every month and ranks among the top 500 sites visited in the US. Most stories are geared toward an American audience.

Researchers at Rand have connected Zero Hedge with online influencers and other media sites known for advancing pro-Kremlin narratives, such as the claim that Ukraine, and not Russia, is to blame for the downing of Malaysia Airlines flight MH17.

Protesters targeting the coronavirus lockdown and 5G in Melbourne in May.

How it was used in parliamentary submissions

For my research, I scoured the top posts circulated by these groups on Facebook for false claims about the health threats posed by 5G. Some stories I found had headlines like “13 Reasons 5G Wireless Technology will be a Catastrophe for Humanity” and “Hundreds of Respected Scientists Sound Alarm about Health Effects as 5G Networks go Global”.

I then tracked the diffusion of these stories on Facebook and identified 10 public groups where they were posted. Two of the groups specifically targeted Australians – Australians for Safe Technology, a group with 48,000 members, and Australia Uncensored. Many others, such as the popular right-wing conspiracy group QAnon, also contained posts about the 5G debate in Australia.




To determine the similarities in phrasing between the articles posted in these Facebook groups and the submissions to the Australian parliamentary committee, I used a text-similarity technique commonly used to detect plagiarism in student papers.

The analysis rates similarities in documents on a scale of 0 (entirely dissimilar) to 1 (exactly alike). There were 38 submissions with at least a 0.5 similarity to posts in the Facebook group 5G Network, Microwave Radiation Dangers and other Health Problems and 35 with a 0.5 similarity to the Australians for Safe Technology group.

This is significant because it means that, for these 73 submissions, at least half of the language was, word for word, identical to posts from extreme conspiracy groups on Facebook.
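The analysis tool itself isn’t public, but the underlying idea can be sketched with a standard bag-of-words cosine similarity, which scores two texts on the same 0-to-1 scale (the sample sentences below are invented for illustration):

```python
import math
import re
from collections import Counter

def similarity(a, b):
    """Cosine similarity of word-count vectors: 0.0 = dissimilar, 1.0 = identical."""
    va, vb = (Counter(re.findall(r"[\w']+", text.lower())) for text in (a, b))
    dot = sum(va[word] * vb[word] for word in va)
    norm = math.sqrt(sum(n * n for n in va.values())) * math.sqrt(sum(n * n for n in vb.values()))
    return dot / norm if norm else 0.0

post = "5g wireless technology will be a catastrophe for humanity"
submission = "5g wireless technology will be a catastrophe for our communities"
print(round(similarity(post, submission), 2))  # 0.84, well above a 0.5 threshold
```

Eight of the sentences’ words overlap exactly, so even this simple measure flags the pair as far more alike than independently written texts would be.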

The first 5G Optus tower in the suburb of Dickson in Canberra.

The impact of misinformation on policy-making

The process for soliciting submissions to a parliamentary inquiry is an important part of our democracy. In theory, it provides ordinary citizens and organisations with a voice in forming policy.

My findings suggest Facebook conspiracy groups and potentially other conspiracy sites are attempting to co-opt this process to directly influence the way Australians think about 5G.

In the pre-internet age, misinformation campaigns often had limited reach and took a significant amount of time to spread. They typically required the production of falsified documents and a sympathetic media outlet. Mainstream news would usually ignore such stories and few people would ever read them.

Today, however, one only needs to create a false social media account and a meme. Misinformation can spread quickly if it is amplified through online trolls and bots.

It can also spread quickly on Facebook, with its algorithm designed to drive ordinary users to extremist groups and pages by exploiting their attraction to divisive content.

And once this manipulative content has been widely disseminated, countering it is like trying to put toothpaste back in the tube.

Misinformation has the potential to undermine faith in governments and institutions and make it more challenging for authorities to make demonstrable improvements in public life. This is why governments need to be more proactive in effectively communicating technical and scientific information, like details about 5G, to the public.

Just as nature abhors a vacuum, a public sphere without trusted voices quickly becomes filled with misinformation.The Conversation

Michael Jensen, Senior Research Fellow, Institute for Governance and Policy Analysis, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

By persisting with COVIDSafe, Australia risks missing out on globally trusted contact tracing


Ritesh Chugh, CQUniversity Australia

Australia has ruled out abandoning the government’s COVIDSafe contact tracing app in favour of the rival “Gapple” model developed by Google and Apple, which is gaining widespread support around the world. Deputy Chief Medical Officer Nick Coatsworth told The Project the COVIDSafe app was “a great platform”.

In the two months since its launch, COVIDSafe has been downloaded just over 6.4 million times – well short of the government’s target of 40% of the Australian population.

Its adoption was plagued by privacy, security and backwards compatibility concerns, and further exacerbated by excessive battery consumption. And despite being described as a vital tool in the response to COVID-19, it is reportedly yet to identify a single infection that hadn’t already been tracked down by manual contact tracing.




Read more:
False positives, false negatives: it’s hard to say if the COVIDSafe app can overcome its shortcomings


It seems the app has failed to win the public’s trust. People weigh the perceived risks of installing software against its anticipated benefits, and in this case the risks appear to outweigh the benefits, despite the dangers of a second coronavirus wave taking hold in our second most populous city.

COVID-19 cases in Melbourne continue to surge. But more broadly, the relatively low number of overall cases in Australia and the lack of adequate buy-in among the public make it difficult for COVIDSafe to make a meaningful contribution.

Is there another way?

Some 91% of Australians have a smartphone, whereas a rough calculation based on the 6.4 million downloads suggests only 28% have downloaded COVIDSafe.

For digital contact tracing to be effective, an uptake of around 60% of the population has been suggested – well beyond even the 40% target which COVIDSafe failed to hit.

The logic is straightforward: we need a system that 60% of people are willing and able to use. And such a system already exists.
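
The arithmetic behind these uptake figures is simple (the population figure below is an approximation, not from the article):

```python
population = 25.5e6        # approximate Australian population (assumption)
downloads = 6.4e6          # reported COVIDSafe downloads
smartphone_share = 0.91    # share of Australians with a smartphone

share_of_population = downloads / population
share_of_smartphone_users = downloads / (population * smartphone_share)
print(f"{share_of_population:.0%} of the population, "
      f"{share_of_smartphone_users:.0%} of smartphone users")
```

Both figures fall well short of the suggested 60% threshold, and even of the government’s 40% target.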

Tech giants Apple and Google have collaboratively developed their own contact-tracing technology, dubbed the “Gapple” model.

How does Gapple work?

Gapple is not an app itself, but a framework that provides Bluetooth-based functionality by which contact tracing can work. Crucially, it has several features that lend it more privacy than COVIDSafe.

In simple terms, it allows Android and iOS (Apple) devices to communicate with one another using existing apps from health authorities, using a contact-tracing system built into the phones’ operating systems.

The system offers an opt-in exposure notification system that can alert users if they have been in close proximity to someone diagnosed with COVID-19.

Gapple’s exposure notification system.

Gapple’s decentralised exposure notification system offers more privacy and security than many other contact-tracing technologies, because:

  • it does not collect or track device location

  • data is collected on the users’ phones rather than a centralised server

  • it does not share users’ identities with other people, Apple or Google

  • health authorities do not have direct access to the data

  • users can continue to use the public health authority’s app without opting into the Gapple exposure notifications, and can turn the notification system off if they change their mind.

The system meets many of the basic principles of the American Civil Liberties Union’s criteria for technology-assisted contact tracing. And its exposure notification settings appear in recent updates of both Android and iOS devices. But without an app that uses the Gapple framework, the exposure notification system cannot be used.
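
As a rough illustration of why the decentralised design is more private, consider this toy sketch (the real Google/Apple framework uses Bluetooth beacons and cryptographic key schedules, not simple random strings):

```python
import secrets

class Phone:
    """Toy model of decentralised exposure notification (illustrative only)."""
    def __init__(self):
        self.broadcast_ids = []   # rolling random IDs this phone has broadcast
        self.heard_ids = set()    # IDs heard from nearby phones, stored on-device

    def broadcast(self) -> str:
        rid = secrets.token_hex(8)   # random, not linkable to the owner's identity
        self.broadcast_ids.append(rid)
        return rid

    def hear(self, rid: str):
        self.heard_ids.add(rid)

    def exposed(self, published_ids) -> bool:
        # The health authority publishes the IDs of diagnosed users; matching
        # happens locally, so no central server learns who met whom.
        return bool(self.heard_ids & set(published_ids))

alice, bob, carol = Phone(), Phone(), Phone()
bob.hear(alice.broadcast())                # Alice and Bob were in close proximity
print(bob.exposed(alice.broadcast_ids))    # Alice uploads her IDs after diagnosis: True
print(carol.exposed(alice.broadcast_ids))  # Carol never met Alice: False
```

The crucial point is the last method: exposure matching runs on the user’s own phone, which is what keeps identities away from health authorities, Apple and Google alike.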

COVID-19 Exposure Notification System.

Gapple going global

Global support for the Gapple model is growing. The United Kingdom, many parts of the United States, Switzerland, Latvia, Italy, Canada and Germany are abandoning their native contact-tracing technologies in favour of a model that could achieve much more widespread adoption worldwide.

The ease of communication between different devices will also make Gapple a crucial part of international contact tracing once borders reopen and people start to travel again.

In this light, it is hard to see why Australia resisted the calls to ditch COVIDSafe and adopt the Gapple model.

Can Australians use Gapple anyway?

No, they can’t, because the Gapple model requires users to download a native app from their region’s public health authority which uses the Gapple exposure notification system. Australia’s decision means that won’t be happening here any time soon.

In grappling with the dilemma between protecting citizens’ civil rights and curbing the spread of the deadly COVID-19 virus, the Gapple model is a trade-off designed to encourage higher uptake of contact-tracing technologies.




Read more:
70% of people surveyed said they’d download a coronavirus app. Only 44% did. Why the gap?


Ultimately, the Gapple model will be a step forward in the world’s fight against COVID-19, because it will encourage significant numbers of people to use it.

The decision to persist with the COVIDSafe app, rather than adopting an emerging global model, could have severe repercussions for Australians. For any digital contact-tracing technology to work effectively, a large number of people must use it, and COVIDSafe has fallen short of that basic requirement.The Conversation

Ritesh Chugh, Senior Lecturer/Discipline Lead – Information Systems and Analysis, CQUniversity Australia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Smart cities can help us manage post-COVID life, but they’ll need trust as well as tech


Sameer Hasija, INSEAD

“This virus may become just another endemic virus in our communities and this virus may never go away.” WHO executive director Mike Ryan, May 13

Vaccine or not, we have to come to terms with the reality that COVID-19 requires us to rethink how we live. And that includes the idea of smart cities that use advanced technologies to serve citizens. This has become critical in a time of pandemic.




Read more:
Coronavirus recovery: public transport is key to avoid repeating old and unsustainable mistakes


Smart city solutions have already proved handy for curbing the contagion. For example, the robot dog SPOT is being trialled in Singapore to remind people to practise physical distancing.

But as we prepare to move beyond this crisis, cities need to design systems that are prepared to handle the next pandemic. Better still, they will reduce the chances of another one.

Issues of trust are central

In a world of egalitarian governments and ethical corporations, the solution to a coronavirus-like pandemic would be simple: a complete individual-level track and trace system. It would use geolocation data and CCTV image recognition, complemented by remote biometric sensors. While some such governments and corporations do exist, putting so much information in the hands of a few, without airtight privacy controls, could lay the foundations of an Orwellian world.




Read more:
Darwin’s ‘smart city’ project is about surveillance and control


Our research on smart city challenges suggests a robust solution should be a mix of protocols and norms covering technology, processes and people. To avoid the perils of individual-level monitoring systems, we need to focus on how to leverage technology to modify voluntary citizen behaviour.

This is not a trivial challenge. Desired behaviours that maximise societal benefit may not align with individual preferences in the short run. In part, this could be due to misplaced beliefs or misunderstanding of the long-term consequences.

As an example, despite the rapid spread of COVID-19 in the US, many states have seen public protests against lockdowns. A significant proportion of polled Americans believe the pandemic is a hoax, or that its threat is being exaggerated for political reasons.

Design systems that build trust

The first step in modifying people’s behaviour to align with the greater good is to design a system that builds trust between the citizens and the city. Providing citizens with timely and credible information about important issues and busting falsehoods goes a long way in creating trust. It helps people to understand which behaviours are safe and acceptable, and why this is for the benefit of the society and their own long-term interest.

In Singapore, the government has very effectively used social media platforms like WhatsApp, Facebook, Twitter, Instagram and Telegram to regularly share COVID-19 information with citizens.

Densely populated cities in countries like India face extra challenges due to vast disparities in education and the many languages used. Smart city initiatives have emerged there to seamlessly provide citizens with information in their local language via a smartphone app. These include an AI-based myth-busting chatbot.




Read more:
How smart city technology can be used to measure social distancing


Guard against misuse of data

Effective smart city solutions require citizens to volunteer data. For example, keeping citizens updated with real-time information about crowding in a public space depends on collecting individual location data in that space.

Australians’ concerns about the COVIDSafe contact-tracing app illustrate the need for transparent safeguards when citizens are asked to share their data.
Lukas Coch/AAP

Individual-level data is also useful to co-ordinate responses during emergencies. Contact tracing, for instance, has emerged as an essential tool in slowing the contagion.

Technology-based smart city initiatives can enable the collection, analysis and reporting of such data. But misuse of data erodes trust, which dissuades citizens from voluntarily sharing their data.

City planners need to think about how they can balance the effectiveness of tech-based solutions with citizens’ privacy concerns. Independent third-party auditing of solutions can help ease these concerns. The MIT Technology Review’s audit report on contact-tracing apps is one example during this pandemic.




Read more:
The trade-offs ‘smart city’ apps like COVIDSafe ask us to make go well beyond privacy


It is also important to create robust data governance policies. These can help foster trust and encourage voluntary sharing of data by citizens.

Using several case studies, the consulting firm PwC has proposed a seven-layer framework for data governance. It describes balancing privacy concerns of citizens and efficacy of smart city initiatives as the “key to realising smart city potential”.

As we emerge from this pandemic, we will need to think carefully about the data governance policies we should implement. It’s important for city officials to learn from early adopters.

While these important issues coming out of smart city design involve our behaviour as citizens, modifying behaviour isn’t enough in itself. Civic leaders also need to rethink the design of our city systems to support citizens in areas like public transport, emergency response, recreational facilities and so on. Active collaboration between city planners, tech firms and citizens will be crucial in orchestrating our future cities and hence our lives.


The author acknowledges suggestions from Aarti Gumaledar, Director of Emergentech Advisors Ltd.The Conversation

Sameer Hasija, Associate Professor of Technology and Operations Management, INSEAD

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Don’t be phish food! Tips to avoid sharing your personal information online



Shutterstock

Nik Thompson, Curtin University

Data is the new oil, and online platforms will siphon it off at any opportunity. Platforms increasingly demand our personal information in exchange for a service.

Avoiding online services altogether can limit your participation in society, so the advice to just opt out is easier said than done.

Here are some tricks you can use to limit your exposure and avoid giving online platforms your personal information: supplying “alternative facts”, using guest checkout options, and using burner emails.

Alternative facts

While “alternative facts” is a term coined by White House press staff to describe factual inaccuracies, in this context it refers to false details supplied in place of your personal information.




Read more:
Hackers are now targeting councils and governments, threatening to leak citizen data


This is an effective strategy to avoid giving out information online. Though platforms might insist you complete a user profile, they can do little to check whether that information is correct. For example, they can check whether a phone number contains the correct number of digits, or whether an email address has a valid format, but that’s about it.
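
A typical sign-up form’s validation is purely about shape, along these lines (illustrative patterns, not any particular site’s code):

```python
import re

def email_format_ok(s: str) -> bool:
    # Checks shape only -- it cannot tell whether the address really exists.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", s) is not None

def phone_format_ok(s: str, expected_digits: int = 10) -> bool:
    # Counts digits only -- any 10-digit string passes.
    return len(re.sub(r"\D", "", s)) == expected_digits

print(email_format_ok("not.a.real.person@example.com"))  # True, yet nobody is home
print(phone_format_ok("0400 000 000"))                   # True
```

Anything that satisfies the pattern sails through, which is exactly why “alternative facts” work.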

When a website requests your date of birth, address, or name, consider how this information will be used and whether you’re prepared to hand it over.

There’s a distinction to be made between which platforms do or don’t warrant using your real information. If it’s an official banking or educational institute website, then it’s important to be truthful.

But an online shopping, gaming, or movie review site shouldn’t require the same level of disclosure, and using an alternative identity could protect you.

Secret shopper

Online stores and services often encourage users to set up a profile, offering convenience in exchange for information. Stores value your profile data, as it can provide them additional revenue through targeted advertising and emails.

But many websites also offer a guest checkout option to streamline the purchase process. After all, one thing as valuable as your data is your money.

So unless you’re making very frequent purchases from a site, use guest checkout and skip profile creation altogether. Even without disclosing extra details, you can still track your delivery, as tracking is provided by transport companies (and not the store).

Also consider your payment options. Many credit cards and payment merchants such as PayPal provide additional buyer protection, adding another layer of separation between you and the website.

Avoid sharing your bank account details online, and instead use an intermediary such as PayPal, or a credit card, to provide additional protection.

If you use a credit card (even prepaid), then even if your details are compromised, any potential losses are limited to the card balance. Also, with credit cards this balance is effectively the bank’s funds, meaning you won’t be charged out of pocket for any fraudulent transactions.

Burner emails

An email address is usually the first item a site requests.

They also often require email verification when a profile is created, and that verification email is probably the only one you’ll ever want to receive from the site. So rather than handing over your main email address, consider a burner email.

This is a fully functional but disposable email address that remains active for about 10 minutes. You can get one for free from online services including Maildrop, Guerilla Mail and 10 Minute Mail.

Just make sure you don’t forget your password, as you won’t be able to recover it once your burner email becomes inactive.

The 10 Minute Mail website offers free burner emails.
screenshot

The risk of being honest

Every online profile containing your personal information is another potential target for attackers. The more profiles you make, the greater the chance of your details being breached.

A breach in one place can lead to others. Names and emails alone are sufficient for email phishing attacks. And a phish becomes more convincing (and more likely to succeed) when paired with other details such as your recent purchasing history.

Surveys indicate about half of us recycle passwords across multiple sites. While this is convenient, it means if a breach at one site reveals your password, then attackers can hack into your other accounts.

In fact, even just an email address is a valuable piece of intelligence, as emails are used as a login for many sites, and a login (unlike a password) can sometimes be impossible to change.

Obtaining your email could open the door for targeted attacks on your other accounts, such as social media accounts.




Read more:
The ugly truth: tech companies are tracking and misusing our data, and there’s little we can do


In “password spraying” attacks, cybercriminals test common passwords against many email addresses or usernames in the hope of landing a correct combination.
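
Conceptually, spraying inverts brute force: instead of many passwords against one account, it is a few common passwords against many accounts. A toy simulation (all data invented) shows why reuse makes it effective:

```python
# Toy simulation only: why reused common passwords fall to spraying.
accounts = {
    "alice@example.com": "qwerty",     # reused common password
    "bob@example.com":   "x9!Tr0ub4",  # unique password
    "carol@example.com": "123456",     # reused common password
}
common_passwords = ["123456", "password", "qwerty"]

# The attacker tries each common password against every account.
compromised = [user for user, pw in accounts.items() if pw in common_passwords]
print(compromised)   # ['alice@example.com', 'carol@example.com']
```

Three guesses per account stay under most lockout thresholds, which is what makes the technique quiet as well as cheap.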

The bottom line is, the safest information is the information you never release. And practising alternatives to disclosing your true details could go a long way to limiting your data being used against you.The Conversation

Nik Thompson, Senior Lecturer, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The coronavirus pandemic is boosting the big tech transformation to warp speed


Zac Rogers, Flinders University

The coronavirus pandemic has sped up changes that were already happening across society, from remote learning and work to e-health, supply chains and logistics, policing, welfare and beyond. Big tech companies have not hesitated to make the most of the crisis.

In New York for example, former Google chief executive Eric Schmidt is leading a panel tasked with transforming the city after the pandemic, “focused on telehealth, remote learning, and broadband”. Microsoft founder Bill Gates has also been called in, to help create “a smarter education system”.

The government, health, education and defence sectors have long been prime targets for “digital disruption”. The American business expert Scott Galloway and others have argued they are irresistible pools of demand for the big tech firms.

As author and activist Naomi Klein writes, changes in these and other areas of our lives are about to see “a warp-speed acceleration”.

All these transformations will follow a similar model: using automated platforms to gather and analyse data via online surveillance, then using it to predict and intervene in human behaviour.




Read more:
Explainer: what is surveillance capitalism and how does it shape our economy?


The control revolution

The changes now under way are the latest phase of a socio-technical transformation that sociologist James Beniger, writing in the 1980s, called a “control revolution”. This revolution began with the use of electronic systems for information gathering and communication to facilitate mass production and distribution of goods in the 19th century.

After World War II the revolution accelerated as governments and industry began to embrace cybernetics, the scientific study of control and communication. Even before COVID-19, we were already in the “reflexive phase” of the control revolution, in which big data and predictive technologies have been turned to the goal of automating human behaviour.

The next phase is what we might call the “uberisation of everything”: replacing existing institutions and processes of government with computational code, in the same way Uber replaced government-regulated taxi systems with a smartphone app.




Read more:
The ‘Uberisation’ of work is driving people to co-operatives


Information economics

Beginning in the 1940s, the work of information theory pioneer Claude Shannon had a deep effect on economists, who saw analogies between signals in electrical circuits and many systems in society. Chief among these new information economists was Leonid Hurwicz, winner of a 2007 Nobel Prize for his work on “mechanism design theory”.

Information theorist Claude Shannon also conducted early experiments in artificial intelligence, including the creation of a maze-solving mechanical mouse.
Bell Labs

Economists have pursued analogies between human and mechanical systems ever since, in part because they lend themselves to modelling, calculation and prediction.

These analogies helped usher in a new economic orthodoxy formed around the ideas of F.A. Hayek, who believed the problem of allocating resources in society was best understood in terms of information processing.

By the 1960s, Hayek had come to view thinking individuals as almost superfluous to the operation of the economy. A better way to allocate resources was to leave decisions to “the market”, which he saw as an omniscient information processor.

Putting information-processing first turned economics on its head. The economic historians Philip Mirowski and Edward Nik-Khah argue economists moved from “ensuring markets give people what they want” to insisting they can make markets produce “any desired outcome regardless of what people want”.

By the 1990s this orthodoxy was triumphant across much of the world. By the late 2000s it was so deeply enmeshed that even the global financial crisis – a market failure of catastrophic proportions – could not dislodge it.




Read more:
We should all beware a resurgent financial sector


Market society

This orthodoxy holds that because markets allocate resources efficiently, it makes sense to put them in charge. We’ve seen many kinds of decisions turned over to automated data-driven markets, designed as auctions.

Online advertising illustrates how this works. First, the data generated by each visitor to a page is gathered, analysed and categorised, with each category acquiring a predictive probability of a given behaviour: buying a given product or service.

Then an automated auction occurs at speed as a web page is loading, matching these behavioural probabilities with clients’ products and services. The goal is to “nudge” the user’s behaviour. As Douglas Rushkoff explains, someone in a category that is 80% likely to do a certain thing might be manipulated up to 85% or 90% if they are shown the right ad.
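
A highly simplified sketch of that auction logic (advertiser names and numbers are invented; real ad exchanges are far more elaborate):

```python
# advertiser -> (bid per click in $, predicted probability this user responds)
bids = {
    "shoes_ad":  (1.20, 0.05),
    "travel_ad": (0.80, 0.10),
    "bank_ad":   (2.00, 0.02),
}

# Rank by expected value: what the impression is worth to each advertiser.
ranked = sorted(bids, key=lambda ad: bids[ad][0] * bids[ad][1], reverse=True)
winner, runner_up = ranked[0], ranked[1]

# Generalised second-price idea: pay just enough to beat the runner-up's value.
price_per_click = (bids[runner_up][0] * bids[runner_up][1]) / bids[winner][1]
print(winner, round(price_per_click, 2))   # travel_ad 0.6
```

The advertiser with the highest bid does not necessarily win; the behavioural probability attached to the user is what tips the auction, which is why the user’s data is so valuable.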




Read more:
Is it time to regulate targeted ads and the web giants that profit from them?


This model is being scaled up to treat society as a whole as a vast signalling device. All human behaviour can be taken as a bid in an invisible auction that aims to optimise resource allocation.

To gather the bids, however, the market needs ever greater awareness of human behaviour. That means total surveillance is here to stay, and will get more intense and pervasive.

Growing surveillance combined with algorithmic interventions in human behaviour constrain our choices to an ever greater extent. Being nudged from an 80% to an 85% chance of doing something might seem innocuous, but that diminishing 20% of unpredictability is the site of human creativity, learning, discovery and choice. Becoming more predictable also means becoming more fragile.

In praise of obscurity

The pandemic has pushed many of us into doing even more by digital means, hitting fast-forward on the growth of surveillance and algorithmic influence, bringing more and more human behaviour into the realm of statistical probability and manipulation.

Concerns about total surveillance are often couched as discussions of privacy, but now is the time to think about the importance of obscurity. Obscurity moves beyond questions of privacy and anonymity to the issue, as Matthew Crawford identifies, of our “qualitative experience of institutional authority”. Obscurity is a buffer zone – a space to be an unobserved, uncategorised, unoptimised human – from which a citizen can enact her democratic rights.

The onrush of digitisation caused by the pandemic may have a positive effect, if the body politic senses the urgency of coming to terms with the widening gap between fast-moving technology and its institutions.

The algorithmic market, left to its optimisation function, may well eventually come to see obscurity as an act of economic terrorism. Such an approach cannot form the basis of institutional authority in a democracy. It’s time to address the real implications of digital technology.




Read more:
A ‘coup des gens’ is underway – and we’re increasingly living under the regime of the algorithm


The Conversation


Zac Rogers, Research Lead, Jeff Bleich Centre for the US Alliance in Digital Technology, Security, and Governance, Flinders University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Internet traffic is growing 25% each year. We created a fingernail-sized chip that can help the NBN keep up



This tiny micro-comb chip produces a precision rainbow of light that can support transmission of 40 terabits of data per second in standard optic fibres.
Corcoran et al., N.Comms., 2020, CC BY-SA

Bill Corcoran, Monash University

Our internet connections have never been more important to us, nor have they been under such strain. As the COVID-19 pandemic has made remote working, remote socialisation, and online entertainment the norm, we have seen an unprecedented spike in society’s demand for data.

Singapore’s prime minister declared broadband to be essential infrastructure. The European Union asked streaming services to limit their traffic. Video conferencing service Zoom was suddenly unavoidable. Even my parents have grown used to reading to my four-year-old over Skype.

In Australia telecommunications companies have supported this growth, with Telstra removing data caps on users and the National Broadband Network (NBN) enabling ISPs to expand their network capacity. In fact, the NBN saw its highest ever peak capacity of 13.8 terabits per second (or Tbps) on April 8 this year. A terabit is one trillion bits, and 1 Tbps is the equivalent of about 40,000 standard NBN connections.




Read more:
Around 50% of homes in Sydney, Melbourne and Brisbane have the oldest NBN technology


This has given us a glimpse of the capacity crunch we could be facing in the near future, as high-speed 5G wireless connections, self-driving cars and the internet of things put more stress on our networks. Internet traffic is growing by 25% each year as society becomes increasingly connected.

We need new technological solutions to expand data infrastructure, without breaking the bank. The key to this is making devices that can transmit and receive massive amounts of data using the optical fibre infrastructure we have already spent time and money putting into the ground.

A high-speed rainbow

Fortunately, such a device is at hand. My colleagues and I have demonstrated a new fingernail-sized chip that can transmit data at 40 Tbps through a single optical fibre connection of the same kind used in the NBN. That’s about three times the record data rate for the entire NBN network and about 100 times the speed of any single device currently used in Australian fibre networks.

The chip uses an “optical micro-comb” to create a rainbow of infrared light that allows data to be transmitted with many frequencies of light at the same time. Our results are published in Nature Communications today.

This collaboration, between Monash, RMIT and Swinburne universities in Melbourne, and international partners (INRS, CIOPM Xi’an, CityU Hong Kong), is the first “field-trial” of an optical micro-comb system, and a record capacity for such a device.

The internet runs on light

Optical fibres have formed the backbone of our communication systems since the late 1980s. The fibres that link the world together carry light signals that are periodically boosted by optical amplifiers which can transmit light with a huge range of wavelengths.

To make the most of this range of wavelengths, different information is sent using signals of different infrared “colours” of light. If you’ve ever seen a prism split up white light into separate colours, you’ve got an insight into how this works – we can add a bunch of these colours together, send the combined signal through a single optical fibre, then split it back up again into the original colours at the other end.




Read more:
What should be done with the NBN in the long run?


Making powerful rainbows from tiny chips

Optical micro-combs are tiny gadgets that in essence use a single laser, a temperature-controlled chip, and a tiny ring called an optical resonator to send out signals using many different wavelengths of light.

(left) Micrograph of the optical ring resonator on the chip. Launching light from a single laser into this chip generates over 100 new laser lines (right). We use 80 lines in the optical C-band (right, green shaded) for our communications system demonstration.
Corcoran et al, N.Comms, 2020

Optical combs have had a major impact on a wide range of research in optics and photonics. Optical micro-combs are miniature devices that can produce optical combs, and they have been used in many exciting demonstrations, including optical communications.

The key to micro-combs lies in optical resonator structures: tiny rings (see picture above) that, when hit with enough light, convert the incoming single wavelength into a precise rainbow of wavelengths.

The demonstration

The test was carried out on a 75-km optical fibre loop in Melbourne.

For our demonstration transmitting data at 40 Tbps, we used a novel kind of micro-comb called a “soliton crystal” that produces 80 separate wavelengths of light that can carry different signals at the same time. To prove the micro-comb could be used in a real-world environment, we transmitted the data through installed optical fibres in Melbourne (provided by AARNet) between RMIT’s City campus and Monash’s Clayton campus and back, for a round trip of 75 kilometres.
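
The headline numbers break down neatly (all figures are from the article):

```python
total_tbps = 40     # demonstrated capacity over a single fibre
channels = 80       # wavelengths produced by the soliton crystal micro-comb

per_channel_gbps = total_tbps * 1000 / channels
print(per_channel_gbps)        # 500.0 Gb/s carried on each "colour"

# The article equates 1 Tbps with roughly 40,000 standard NBN connections:
print(total_tbps * 40_000)     # 1600000
```

In other words, one fingernail-sized chip on one fibre can, in principle, carry the equivalent of about 1.6 million standard NBN connections.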

This shows that the optical fibres we have in the ground today can handle huge capacity growth, simply by changing what we plug into those fibres.

What’s next?

There is more work to do! Monash and RMIT are working together to make the micro-comb devices more flexible and simpler to run.

Putting not only the micro-comb, but also the modulators that turn an electrical signal into an optical signal, on a single chip is a tremendous technical challenge.

There are new frontiers of optical communications to explore with these micro-combs, looking at using parallel paths in space, improving data rates for satellite communications, and in making “light that thinks”: artificial optical neural networks. The future is bright for these tiny rainbows.


We gratefully acknowledge support from Australia’s Academic Research Network (AARNet) for supporting our access to the field-trial cabling through the Australian Lightwave Infrastructure Research Testbed (ALIRT), and in particular Tim Rayner, John Nicholls, Anna Van, Jodie O’Donohoe and Stuart Robinson.The Conversation

Bill Corcoran, Lecturer & Research Fellow, Monash Photonic Communications Lab & InPAC, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.