Smart cities can help us manage post-COVID life, but they’ll need trust as well as tech


Sameer Hasija, INSEAD

“This virus may become just another endemic virus in our communities and this virus may never go away.” WHO executive director Mike Ryan, May 13

Vaccine or not, we have to come to terms with the reality that COVID-19 requires us to rethink how we live. And that includes the idea of smart cities that use advanced technologies to serve citizens. This has become critical in a time of pandemic.




Read more:
Coronavirus recovery: public transport is key to avoid repeating old and unsustainable mistakes


Smart city solutions have already proved handy for curbing the contagion. For example, the robot dog SPOT is being trialled in Singapore to remind people to practise physical distancing.

But as we prepare to move beyond this crisis, cities need to design systems ready to handle the next pandemic. Better still, such systems could reduce the chances of another one occurring at all.

Issues of trust are central

In a world of egalitarian governments and ethical corporations, the solution to a coronavirus-like pandemic would be simple: a complete individual-level track and trace system. It would use geolocation data and CCTV image recognition, complemented by remote biometric sensors. While some such governments and corporations do exist, putting so much information in the hands of a few, without airtight privacy controls, could lay the foundations of an Orwellian world.




Read more:
Darwin’s ‘smart city’ project is about surveillance and control


Our research on smart city challenges suggests a robust solution should be a mix of protocols and norms covering technology, processes and people. To avoid the perils of individual-level monitoring systems, we need to focus on how to leverage technology to modify voluntary citizen behaviour.

This is not a trivial challenge. Desired behaviours that maximise societal benefit may not align with individual preferences in the short run. In part, this could be due to misplaced beliefs or misunderstanding of the long-term consequences.

For example, despite the rapid spread of COVID-19 in the US, many states have seen public protests against lockdowns. A significant proportion of polled Americans believe the pandemic is a hoax, or that its threat is being exaggerated for political reasons.

Design systems that build trust

The first step in modifying people’s behaviour to align with the greater good is to design a system that builds trust between the citizens and the city. Providing citizens with timely and credible information about important issues and busting falsehoods goes a long way in creating trust. It helps people to understand which behaviours are safe and acceptable, and why this is for the benefit of the society and their own long-term interest.

In Singapore, the government has very effectively used social media platforms like WhatsApp, Facebook, Twitter, Instagram and Telegram to regularly share COVID-19 information with citizens.

Densely populated cities in countries like India face extra challenges due to vast disparities in education and the many languages used. Smart city initiatives have emerged there to seamlessly provide citizens with information in their local language via a smartphone app. These include an AI-based myth-busting chatbot.




Read more:
How smart city technology can be used to measure social distancing


Guard against misuse of data

Effective smart city solutions require citizens to volunteer data. For example, keeping citizens updated with real-time information about crowding in a public space depends on collecting individual location data in that space.
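To make the mechanics concrete, here is a minimal sketch (hypothetical function names, fields and time window, not any real city platform's API) of how anonymous location pings could be aggregated into the kind of crowding figure described above:

```python
# Minimal sketch: turn anonymous location "pings" into per-space crowd counts.
# Field names and the time window are illustrative assumptions only.
from collections import defaultdict
from datetime import datetime, timedelta, timezone

def crowd_levels(pings, window_minutes=10):
    """pings: iterable of (device_id, space_id, seen_at) tuples."""
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=window_minutes)
    devices_per_space = defaultdict(set)
    for device_id, space_id, seen_at in pings:
        if seen_at >= cutoff:
            # A set means each device counts once per space, however often it pings.
            devices_per_space[space_id].add(device_id)
    return {space: len(devices) for space, devices in devices_per_space.items()}
```

Publishing only the aggregate counts is one way to give citizens useful real-time information while keeping the underlying individual location data out of circulation.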

Australians’ concerns about the COVIDSafe contact-tracing app illustrate the need for transparent safeguards when citizens are asked to share their data.
Lukas Coch/AAP

Individual-level data is also useful to co-ordinate responses during emergencies. Contact tracing, for instance, has emerged as an essential tool in slowing the contagion.

Technology-based smart city initiatives can enable the collection, analysis and reporting of such data. But misuse of data erodes trust, which dissuades citizens from voluntarily sharing their data.

City planners need to think about how they can balance the effectiveness of tech-based solutions with citizens’ privacy concerns. Independent third-party auditing of solutions can help ease these concerns. The MIT Technology Review’s audit report on contact-tracing apps is one example during this pandemic.




Read more:
The trade-offs ‘smart city’ apps like COVIDSafe ask us to make go well beyond privacy


It is also important to create robust data governance policies. These can help foster trust and encourage voluntary sharing of data by citizens.

Using several case studies, the consulting firm PwC has proposed a seven-layer framework for data governance. It describes balancing privacy concerns of citizens and efficacy of smart city initiatives as the “key to realising smart city potential”.

As we emerge from this pandemic, we will need to think carefully about the data governance policies we should implement. It’s important for city officials to learn from early adopters.

While these important issues coming out of smart city design involve our behaviour as citizens, modifying behaviour isn’t enough in itself. Civic leaders also need to rethink the design of our city systems to support citizens in areas like public transport, emergency response, recreational facilities and so on. Active collaboration between city planners, tech firms and citizens will be crucial in orchestrating our future cities and hence our lives.


The author acknowledges suggestions from Aarti Gumaledar, Director of Emergentech Advisors Ltd.The Conversation

Sameer Hasija, Associate Professor of Technology and Operations Management, INSEAD

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Don’t be phish food! Tips to avoid sharing your personal information online




Nik Thompson, Curtin University

Data is the new oil, and online platforms will siphon it off at any opportunity. Platforms increasingly demand our personal information in exchange for a service.

Avoiding online services altogether can limit your participation in society, so the advice to just opt out is easier said than done.

Here are some tricks you can use to avoid giving online platforms your personal information: supplying “alternative facts”, using guest check-out options, and using a burner email.

Alternative facts

While “alternative facts” is a term coined by White House press staff to describe factual inaccuracies, in this context it refers to false details supplied in place of your personal information.




Read more:
Hackers are now targeting councils and governments, threatening to leak citizen data


This is an effective strategy for avoiding giving out information online. Though platforms might insist you complete a user profile, they can do little to check whether that information is correct. For example, they can check whether a phone number contains the correct number of digits, or whether an email address has a valid format, but that’s about it.
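To see how shallow those checks are, here is a minimal sketch (simplified, illustrative patterns only) of format-only validation: it confirms that an email address has the right shape and a phone number has the right number of digits, but says nothing about whether the details are true.

```python
# Format-only checks of the kind a sign-up form can perform.
# The patterns are deliberately simplified and purely illustrative.
import re

def looks_like_email(value):
    # Shape check only: something@something.tld -- not proof the address is real.
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

def looks_like_phone(value, expected_digits=10):
    # Digit-count check only, after stripping spaces, dashes and brackets.
    digits = re.sub(r"[\s\-+()]", "", value)
    return digits.isdigit() and len(digits) == expected_digits

print(looks_like_email("jane.doe@example.com"))  # True, even if no such person exists
print(looks_like_phone("0400 000 000"))          # True, purely on digit count
```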

When a website requests your date of birth, address, or name, consider how this information will be used and whether you’re prepared to hand it over.

There’s a distinction to be made between which platforms do or don’t warrant using your real information. If it’s an official banking or educational institute website, then it’s important to be truthful.

But an online shopping, gaming, or movie review site shouldn’t require the same level of disclosure, and using an alternative identity could protect you.

Secret shopper

Online stores and services often encourage users to set up a profile, offering convenience in exchange for information. Stores value your profile data, as it can provide them additional revenue through targeted advertising and emails.

But many websites also offer a guest checkout option to streamline the purchase process. After all, one thing as valuable as your data is your money.

So unless you’re making very frequent purchases from a site, use guest checkout and skip profile creation altogether. Even without disclosing extra details, you can still track your delivery, as tracking is provided by transport companies (and not the store).

Also consider your payment options. Many credit cards and payment merchants such as PayPal provide additional buyer protection, adding another layer of separation between you and the website.

Avoid sharing your bank account details online, and instead use an intermediary such as PayPal, or a credit card, to provide additional protection.

If you use a credit card (even prepaid), then even if your details are compromised, any potential losses are limited to the card balance. Also, with credit cards this balance is effectively the bank’s funds, meaning you won’t be charged out of pocket for any fraudulent transactions.

Burner emails

An email address is usually the first item a site requests.

They also often require email verification when a profile is created, and that verification email is probably the only one you’ll ever want to receive from the site. So rather than handing over your main email address, consider a burner email.

This is a fully functional but disposable email address that remains active for about 10 minutes. You can get one for free from online services including Maildrop, Guerilla Mail and 10 Minute Mail.

Just make sure you don’t forget your password, as you won’t be able to recover it once your burner email becomes inactive.

The 10 Minute Mail website offers free burner emails.
screenshot

The risk of being honest

Every online profile containing your personal information is another potential target for attackers. The more profiles you make, the greater the chance of your details being breached.

A breach in one place can lead to others. Names and emails alone are sufficient for email phishing attacks. And a phish becomes more convincing (and more likely to succeed) when paired with other details such as your recent purchasing history.

Surveys indicate about half of us recycle passwords across multiple sites. While this is convenient, it means if a breach at one site reveals your password, then attackers can hack into your other accounts.
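One practical defence is to check whether a password already appears in known breach data before reusing it. The sketch below uses the publicly documented Pwned Passwords “range” API, which works on a k-anonymity basis so only the first five characters of the password’s SHA-1 hash ever leave your machine; treat the endpoint and response format as details to verify against the current documentation.

```python
# Check a password against known breach corpora via the Pwned Passwords
# range API (k-anonymity: only a 5-character hash prefix is sent).
# Endpoint and response format should be verified against current docs.
import hashlib
import urllib.request

def times_pwned(password):
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as response:
        body = response.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

# A non-zero result means the password has appeared in breaches: never reuse it.
print(times_pwned("password123"))
```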

In fact, even just an email address is a valuable piece of intelligence, as emails are used as a login for many sites, and a login (unlike a password) can sometimes be impossible to change.

Obtaining your email could open the door for targeted attacks on your other accounts, such as social media accounts.




Read more:
The ugly truth: tech companies are tracking and misusing our data, and there’s little we can do


In “password spraying” attacks, cybercriminals test common passwords against many emails or usernames in the hope of landing a correct combination.

The bottom line is, the safest information is the information you never release. And practising alternatives to disclosing your true details could go a long way to limiting your data being used against you.The Conversation

Nik Thompson, Senior Lecturer, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The coronavirus pandemic is boosting the big tech transformation to warp speed


Zac Rogers, Flinders University

The coronavirus pandemic has sped up changes that were already happening across society, from remote learning and work to e-health, supply chains and logistics, policing, welfare and beyond. Big tech companies have not hesitated to make the most of the crisis.

In New York for example, former Google chief executive Eric Schmidt is leading a panel tasked with transforming the city after the pandemic, “focused on telehealth, remote learning, and broadband”. Microsoft founder Bill Gates has also been called in, to help create “a smarter education system”.

The government, health, education and defence sectors have long been prime targets for “digital disruption”. The American business expert Scott Galloway and others have argued they are irresistible pools of demand for the big tech firms.

As author and activist Naomi Klein writes, changes in these and other areas of our lives are about to see “a warp-speed acceleration”.

All these transformations will follow a similar model: using automated platforms to gather and analyse data via online surveillance, then using it to predict and intervene in human behaviour.




Read more:
Explainer: what is surveillance capitalism and how does it shape our economy?


The control revolution

The changes now under way are the latest phase of a socio-technical transformation that sociologist James Beniger, writing in the 1980s, called a “control revolution”. This revolution began with the use of electronic systems for information gathering and communication to facilitate mass production and distribution of goods in the 19th century.

After World War II the revolution accelerated as governments and industry began to embrace cybernetics, the scientific study of control and communication. Even before COVID-19, we were already in the “reflexive phase” of the control revolution, in which big data and predictive technologies have been turned to the goal of automating human behaviour.

The next phase is what we might call the “uberisation of everything”: replacing existing institutions and processes of government with computational code, in the same way Uber replaced government-regulated taxi systems with a smartphone app.




Read more:
The ‘Uberisation’ of work is driving people to co-operatives


Information economics

Beginning in the 1940s, the work of information theory pioneer Claude Shannon had a deep effect on economists, who saw analogies between signals in electrical circuits and many systems in society. Chief among these new information economists was Leonid Hurwicz, winner of a 2007 Nobel Prize for his work on “mechanism design theory”.

Information theorist Claude Shannon also conducted early experiments in artificial intelligence, including the creation of a maze-solving mechanical mouse.
Bell Labs

Economists have pursued analogies between human and mechanical systems ever since, in part because they lend themselves to modelling, calculation and prediction.

These analogies helped usher in a new economic orthodoxy formed around the ideas of F.A. Hayek, who believed the problem of allocating resources in society was best understood in terms of information processing.

By the 1960s, Hayek had come to view thinking individuals as almost superfluous to the operation of the economy. A better way to allocate resources was to leave decisions to “the market”, which he saw as an omniscient information processor.

Putting information-processing first turned economics on its head. The economic historians Philip Mirowski and Edward Nik-Khah argue economists moved from “ensuring markets give people what they want” to insisting they can make markets produce “any desired outcome regardless of what people want”.

By the 1990s this orthodoxy was triumphant across much of the world. By the late 2000s it was so deeply enmeshed that even the global financial crisis – a market failure of catastrophic proportions – could not dislodge it.




Read more:
We should all beware a resurgent financial sector


Market society

This orthodoxy holds that if information markets make for efficient resource allocation, it makes sense to put them in charge. We’ve seen many kinds of decisions turned over to automated data-driven markets, designed as auctions.

Online advertising illustrates how this works. First, the data generated by each visitor to a page is gathered, analysed and categorised, with each category acquiring a predictive probability of a given behaviour: buying a given product or service.

Then an automated auction occurs at speed as a web page is loading, matching these behavioural probabilities with clients’ products and services. The goal is to “nudge” the user’s behaviour. As Douglas Rushkoff explains, someone in a category that is 80% likely to do a certain thing might be manipulated up to 85% or 90% if they are shown the right ad.
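As a toy illustration of that mechanism (all names, bids and probabilities below are invented, and this is nothing like any real ad exchange's actual algorithm), an automated auction can simply rank advertisers by bid weighted by the predicted probability that a user in this category will act:

```python
# Toy ad auction: rank candidates by expected value = bid x predicted
# probability of the desired behaviour for this user category.
# Entirely illustrative; real exchanges are vastly more complex.
def run_auction(user_category, candidates):
    scored = sorted(
        ((c["bid"] * c["p"].get(user_category, 0.0), c["advertiser"]) for c in candidates),
        reverse=True,
    )
    expected_value, winner = scored[0]
    return winner, round(expected_value, 2)

candidates = [
    {"advertiser": "ShoeCo",   "bid": 2.00, "p": {"likely_buyer": 0.80}},
    {"advertiser": "StreamCo", "bid": 3.50, "p": {"likely_buyer": 0.30}},
]
print(run_auction("likely_buyer", candidates))  # ('ShoeCo', 1.6)
```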




Read more:
Is it time to regulate targeted ads and the web giants that profit from them?


This model is being scaled up to treat society as a whole as a vast signalling device. All human behaviour can be taken as a bid in an invisible auction that aims to optimise resource allocation.

To gather the bids, however, the market needs ever greater awareness of human behaviour. That means total surveillance is here to stay, and will get more intense and pervasive.

Growing surveillance combined with algorithmic interventions in human behaviour constrain our choices to an ever greater extent. Being nudged from an 80% to an 85% chance of doing something might seem innocuous, but that diminishing 20% of unpredictability is the site of human creativity, learning, discovery and choice. Becoming more predictable also means becoming more fragile.

In praise of obscurity

The pandemic has pushed many of us into doing even more by digital means, hitting fast-forward on the growth of surveillance and algorithmic influence, bringing more and more human behaviour into the realm of statistical probability and manipulation.

Concerns about total surveillance are often couched as discussions of privacy, but now is the time to think about the importance of obscurity. Obscurity moves beyond questions of privacy and anonymity to the issue, as Matthew Crawford identifies, of our “qualitative experience of institutional authority”. Obscurity is a buffer zone – a space to be an unobserved, uncategorised, unoptimised human – from which a citizen can enact her democratic rights.

The onrush of digitisation caused by the pandemic may have a positive effect, if the body politic senses the urgency of coming to terms with the widening gap between fast-moving technology and its institutions.

The algorithmic market, left to its optimisation function, may well eventually come to see obscurity as an act of economic terrorism. Such an approach cannot form the basis of institutional authority in a democracy. It’s time to address the real implications of digital technology.




Read more:
A ‘coup des gens’ is underway – and we’re increasingly living under the regime of the algorithm


The Conversation


Zac Rogers, Research Lead, Jeff Bleich Centre for the US Alliance in Digital Technology, Security, and Governance, Flinders University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Internet traffic is growing 25% each year. We created a fingernail-sized chip that can help the NBN keep up



This tiny micro-comb chip produces a precision rainbow of light that can support transmission of 40 terabits of data per second in standard optic fibres.
Corcoran et al., N.Comms., 2020, CC BY-SA

Bill Corcoran, Monash University

Our internet connections have never been more important to us, nor have they been under such strain. As the COVID-19 pandemic has made remote working, remote socialisation, and online entertainment the norm, we have seen an unprecedented spike in society’s demand for data.

Singapore’s prime minister declared broadband to be essential infrastructure. The European Union asked streaming services to limit their traffic. Video conferencing service Zoom was suddenly unavoidable. Even my parents have grown used to reading to my four-year-old over Skype.

In Australia telecommunications companies have supported this growth, with Telstra removing data caps on users and the National Broadband Network (NBN) enabling ISPs to expand their network capacity. In fact, the NBN saw its highest ever peak capacity of 13.8 terabits per second (or Tbps) on April 8 this year. A terabit is one trillion bits, and 1 Tbps is the equivalent of about 40,000 standard NBN connections.
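A quick back-of-the-envelope check of those figures, using only the numbers quoted above:

```python
# Back-of-the-envelope arithmetic on the quoted NBN figures.
peak_tbps = 13.8                          # record peak on April 8
connections_per_tbps = 40_000             # "1 Tbps ~ 40,000 standard connections"

mbps_per_connection = 1_000_000 / connections_per_tbps
print(mbps_per_connection)                # 25.0 -> a "standard" connection of ~25 Mbps
print(peak_tbps * connections_per_tbps)   # 552000.0 -> equivalent connections at peak
```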




Read more:
Around 50% of homes in Sydney, Melbourne and Brisbane have the oldest NBN technology


This has given us a glimpse of the capacity crunch we could be facing in the near future, as high-speed 5G wireless connections, self-driving cars and the internet of things put more stress on our networks. Internet traffic is growing by 25% each year as society becomes increasingly connected.

We need new technological solutions to expand data infrastructure, without breaking the bank. The key to this is making devices that can transmit and receive massive amounts of data using the optical fibre infrastructure we have already spent time and money putting into the ground.

A high-speed rainbow

Fortunately, such a device is at hand. My colleagues and I have demonstrated a new fingernail-sized chip that can transmit data at 40 Tbps through a single optical fibre connection of the same kind used in the NBN. That’s about three times the record data rate for the entire NBN network and about 100 times the speed of any single device currently used in Australian fibre networks.

The chip uses an “optical micro-comb” to create a rainbow of infrared light that allows data to be transmitted with many frequencies of light at the same time. Our results are published in Nature Communications today.

This collaboration, between Monash, RMIT and Swinburne universities in Melbourne, and international partners (INRS, CIOPM Xi’an, CityU Hong Kong), is the first “field-trial” of an optical micro-comb system, and a record capacity for such a device.

The internet runs on light

Optical fibres have formed the backbone of our communication systems since the late 1980s. The fibres that link the world together carry light signals that are periodically boosted by optical amplifiers which can transmit light with a huge range of wavelengths.

To make the most of this range of wavelengths, different information is sent using signals of different infrared “colours” of light. If you’ve ever seen a prism split up white light into separate colours, you’ve got an insight into how this works – we can add a bunch of these colours together, send the combined signal through a single optical fibre, then split it back up again into the original colours at the other end.




Read more:
What should be done with the NBN in the long run?


Making powerful rainbows from tiny chips

Optical micro-combs are tiny gadgets that in essence use a single laser, a temperature-controlled chip, and a tiny ring called an optical resonator to send out signals using many different wavelengths of light.

(left) Micrograph of the optical ring resonator on the chip. Launching light from a single laser into this chip generates over 100 new laser lines (right). We use 80 lines in the optical C-band (right, green shaded) for our communications system demonstration.
Corcoran et al, N.Comms, 2020

Optical combs have had a major impact on a massive range of research in optics and photonics. Optical microcombs are miniature devices that can produce optical combs, and have been used in a wide range of exciting demonstrations, including optical communications.

The key to micro-combs are optical resonator structures, tiny rings (see picture above) that when hit with enough light convert the incoming single wavelength into a precise rainbow of wavelengths.

The demonstration

The test was carried out on a 75-km optical fibre loop in Melbourne.

For our demonstration transmitting data at 40 Tbps, we used a novel kind of micro-comb called a “soliton crystal” that produces 80 separate wavelengths of light that can carry different signals at the same time. To prove the micro-comb could be used in a real-world environment, we transmitted the data through installed optical fibres in Melbourne (provided by AARNet) between RMIT’s City campus and Monash’s Clayton campus and back, for a round trip of 75 kilometres.
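A similar sanity check on the chip’s headline numbers, again using only figures quoted in the article:

```python
# Rough arithmetic: 40 Tbps shared across the micro-comb's 80 wavelengths.
total_tbps = 40
wavelengths = 80
per_channel_gbps = total_tbps * 1_000 / wavelengths
print(per_channel_gbps)  # 500.0 -> each "colour" carries roughly 500 Gbps
```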

This shows that the optical fibres we have in the ground today can handle huge capacity growth, simply by changing what we plug into those fibres.

What’s next?

There is more work to do! Monash and RMIT are working together to make the micro-comb devices more flexible and simpler to run.

Putting not only the micro-comb, but also the modulators that turn an electrical signal into an optical signal, on a single chip is a tremendous technical challenge.

There are new frontiers of optical communications to explore with these micro-combs, looking at using parallel paths in space, improving data rates for satellite communications, and in making “light that thinks”: artificial optical neural networks. The future is bright for these tiny rainbows.


We gratefully acknowledge Australia’s Academic Research Network (AARNet) for supporting our access to the field-trial cabling through the Australian Lightwave Infrastructure Research Testbed (ALIRT), and in particular Tim Rayner, John Nicholls, Anna Van, Jodie O’Donohoe and Stuart Robinson.The Conversation

Bill Corcoran, Lecturer & Research Fellow, Monash Photonic Communications Lab & InPAC, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

False positives, false negatives: it’s hard to say if the COVIDSafe app can overcome its shortcomings




Dinesh Kumar, RMIT University and Pj Radcliffe, RMIT University

The Australian government’s contact-tracing app, COVIDSafe, has been touted as crucial for restarting the country’s economy and curbing COVID-19’s spread.

But until more data are collected, it’s hard to estimate how effective the app will be. Nonetheless, there are some predictable situations in which COVIDSafe’s design may mean it will struggle to fulfil its purpose.

False positives

COVIDSafe uses Bluetooth to digitally “trace” people with whom a user has come into contact, with the aim of alerting anyone who has interacted with a confirmed COVID-19 case. But this technology carries a risk of “false positives”, wherein a user may be falsely alerted despite not actually having come into contact with the virus.

This is because Bluetooth radio waves pass through walls and glass. They can only measure how physically close two people are; they can’t tell whether those people are in the same room, in different rooms, or even in different cars passing each other.

In a high-density apartment building, depending on the strength of Bluetooth signals, it’s possible COVIDSafe could falsely alert plenty of people.
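A simplified log-distance path-loss model shows why. All constants below are illustrative assumptions, not COVIDSafe's actual parameters, but the point carries over: the receiver sees only a signal-strength number, and two very different physical situations can produce the same number.

```python
# Simplified log-distance path-loss model; constants are illustrative only.
import math

RSSI_AT_1M_DBM = -59   # assumed received signal strength at 1 metre
PATH_LOSS_EXP = 2.0    # free-space-like propagation exponent

def expected_rssi(distance_m, obstruction_loss_db=0.0):
    return RSSI_AT_1M_DBM - 10 * PATH_LOSS_EXP * math.log10(distance_m) - obstruction_loss_db

# A neighbour 1 m away on the other side of a thin wall (~3 dB of loss)...
through_wall = expected_rssi(1.0, obstruction_loss_db=3.0)
# ...looks almost identical to a person 1.4 m away in the same room.
same_room = expected_rssi(1.4)
print(round(through_wall, 1), round(same_room, 1))  # -62.0 -61.9
```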




Read more:
As coronavirus forces us to keep our distance, city density matters less than internal density


The Department of Health has acknowledged this complication, saying:

If this happens and one of the contacts is identified as having coronavirus, state and territory health officials will talk to the people to work out if this was a legitimate contact or not.

Nonetheless, this process may cause unnecessary distress, and could also have negative flow-on effects on the economy by keeping people home unnecessarily. False positives could also erode public trust in the app’s effectiveness.

False negatives

On the other side of the coin, COVIDSafe also has the potential for “false negatives”. Put simply, it cannot identify transmission of the virus that doesn’t happen through person-to-person proximity.

We know COVID-19 can survive on different surfaces for various periods of time. COVIDSafe would not be able to alert people exposed to the virus via a solid surface, such as a shopping trolley or elevator button, if the person who contaminated that surface had already left the scene.

COVIDSafe is also not helpful in the case of users who become infected with COVID-19 but remain asymptomatic. Such a person may never get tested and upload their contact data to the app’s central data store, but may still be able to pass the virus to those around them. More data is needed on asymptomatic transmission.




Read more:
Why do some people with coronavirus get symptoms while others don’t?


And the decision to classify “close contacts” as people who have been within 1.5 metres of each other for 15 minutes may have been based on research from Japan that applies when people are in an open space and the air is moving.

However, this research also showed micro-droplets remained suspended in the air for 20 minutes in enclosed spaces. Thus, the 1.5m for 15 minutes rule may be questionable for indoor settings.

Downloads vs usage

Recently, Iceland’s contact tracing app achieved the highest penetration of any such app in the world, with almost 40% of the population opting in. But Icelandic Police Service detective inspector Gestur Pálmason – who has overseen contact tracing efforts – said while it was useful in a few cases, the app “wasn’t a game-changer”.

Australia’s Prime Minister Scott Morrison has said on multiple occasions COVIDSafe requires a 40% uptake to be effective.

Since then, federal health minister Greg Hunt has said there’s “no magic figure, but every set of people that download will make it easier and help”. This was echoed more recently by Department of Health acting secretary Caroline Edwards, who told a Senate committee there was no specific uptake goal within her team.

Past modelling revealed infection could be controlled if more than 70% of the population were taking the necessary precautions. It’s unclear what science (if any) was forming the basis of Australia’s initial 40% uptake goal for COVIDSafe.

This goal is also lower than proposed figures from other experts around the world, who have suggested goals varying from 50-70%, and 80% for UK smartphone owners. But the fact is, these figures are estimates and are difficult to test for accuracy.

A survey conducted by University of Sydney researchers suggested in Sydney and Melbourne, COVIDSafe’s uptake could already be at 40% – but lower in other places.



Read more:
In some places 40% of us may have downloaded COVIDSafe. Here’s why the government should share what it knows


Demographic bias

There are many other uncertainties about COVIDSafe’s effectiveness.

We lack data on whether the app is actually being downloaded by those most at risk.

We also know COVIDSafe doesn’t work properly on iPhones and some older model mobile phones. And older devices are more likely to be owned by those who are elderly, or less financially privileged.

What’s more, COVIDSafe can’t fulfil its contact tracing potential until it’s downloaded by a critical mass of people who have already contracted the virus. At this stage, the more people infected with COVID-19 that download the app, the better.

A tough nut to crack

Implementing a contact tracing app is a difficult task for our leaders and medical experts. This is because much remains unknown about the COVID-19 virus, and how people will continue to respond to rules as restrictions lift around the country.

Predictions of the disease’s spread have also shown a lot of variation.

Thus, there are many unknowns making it impossible to predict the outcome. The important thing is for people to not start taking risks just because they’ve downloaded COVIDSafe.

And while the government pushes for more downloads and reopening the economy, ongoing reviews will be crucial to improving the app’s functionality.The Conversation

Dinesh Kumar, Professor, Electrical and Biomedical Engineering, RMIT University and Pj Radcliffe, Senior Lecturer, Electrical and Computer Engineering, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

In some places 40% of us may have downloaded COVIDSafe. Here’s why the government should share what it knows


Robert Slonim, University of Sydney

It’s 18 days since the government launched its digital contact-tracing app COVIDSafe. The latest figure we have for downloads is 5.4 million, on May 8, about 29% of smartphone users aged 14 and over.

My own mini-survey suggests that in Sydney and Melbourne the takeup could already be 40% – a figure the government has mentioned as a target – while in other places it is much lower.

Oddly, it’s information the government isn’t sharing with us.


Total number of COVIDSafe app users (millions).

The importance of downloading and using the app is growing day by day as we relax restrictions. We are able to see what has happened in countries such as South Korea that have relaxed restrictions and then experienced a second wave.

5.4 million Australians after 13 days is a promising start.

As can be seen in the above graph produced by my colleague Demetris Christodoulou and me, 5.4 million downloads represents about 28.7% of Australians with smartphones.




Read more:
Chief Medical Officer Brendan Murphy predicts more than 50% take-up of COVID tracing app


It compares favourably to the 22.4% of Singaporeans with smartphones who downloaded their app within 13 days of its launch.

But the government is only making public a single figure indicating “total” downloads. It would be far more useful if it provided disaggregated community, city and state level data, and below, I attempt to fill the breach.

Letting us know more about which communities are downloading the app would help with health, motivation and transparency.

Health

Knowledge about potentially-dramatic variations in where the app was being downloaded could help guide policy.

Hypothetically speaking, if 70% of Melbourne’s smartphone users had downloaded the app but only 20% of Adelaide’s users, this could have distinct implications for the ability to successfully trace COVID-19 outbreaks in the respective cities and for the right amount of easing of restrictions in each city.

It could also help residents of those cities make more informed decisions about their own safety, such as whether and how to shop and whether to wear a mask.

Motivation

While COVIDSafe originally generated more than 500,000 daily downloads, the number has fallen to less than 100,000, suggesting new efforts to motivate downloads are urgently needed.

Providing geographical details could energise downloads in three ways.

First, people often feel enormous pride when their community steps up to help others. Knowing how well the community is doing is likely to motivate more people to help.




Read more:
COVIDSafe tracking app reviewed: the government delivers on data security, but other issues remain


Second, knowing how well other communities are doing can be a powerful incentive to catch up; few people want to be in the community that isn’t doing its part.

Third, if state leaders make decisions about relaxing restrictions partly on the basis of local downloads, community members will see a direct connection between downloading the app and the freedoms that will be available to them.

Transparency

The government’s appeal to download the app is built around trust.

It has asked us to trust it by downloading the app. In return it should trust us with better information.

People in Adelaide, Alice Springs, Brisbane, Cairns, Canberra, Darwin, Geelong, the Gold Coast, Hobart, Launceston, Melbourne, Newcastle, Perth, Sydney, Townsville, Wollongong, rural communities and other places deserve access to information the government already has that could help them make better choices.

The sort of data authorities are keeping to themselves

Given the lack of transparency to date, I conducted my own online survey among 876 residents of Sydney, Melbourne and regional communities with less than 50,000 people.

My survey results, run with a sample of people using the online survey platform PureProfile, indicate the proportion of people who had downloaded the app by May 11 was 50.5% in Sydney, 44.0% in Melbourne and 36.1% in less populated communities.

Controlling for age and gender, there was no significant difference between downloads in Sydney and Melbourne. Both were significantly higher than rural communities.




Read more:
Contact tracing apps: a behavioural economist’s guide to improving uptake


Restricting the responses to people who have a mobile phone that is capable of downloading the app, the proportion of downloads increases to 53.8% in Sydney, 47.8% in Melbourne and 41.2% in less populated communities. An extra 7.2%, 6.9% and 5.7% of respondents said they would either definitely or probably download the app in the next week.

This survey evidence indicates there are stark regional differences in downloads, and that although the national level of downloads is about 29%, some locations such as Sydney and Melbourne may have already surpassed (or will soon surpass) the government’s stated 40% target.

Of course the government shouldn’t rely on these survey results, because it has the actual figures. It is time it shared the detailed download information it holds, both to reciprocate our trust and to let us make more informed decisions.The Conversation

Robert Slonim, Professor of Economics, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

70% of people surveyed said they’d download a coronavirus app. Only 44% did. Why the gap?




Simon J Dennis, University of Melbourne; Amy Perfors, UCLA School of Medicine; Daniel R. Little, University of Melbourne; Joshua P. White, University of Melbourne; Lewis Mitchell, University of Adelaide; Nic Geard, University of Melbourne; Paul M. Garrett, University of Melbourne, and Stephan Lewandowsky, University of Bristol

In late March, we posed a hypothetical scenario to a sample of Australians, asking if they would download a contact tracing app released by the federal government; 70% responded in favour.

But a more recent survey, following the release of COVIDSafe, revealed only 44% of respondents had downloaded it.

The Australian government’s COVIDSafe app aims to help reduce the spread of COVID-19 and let us all return to normal life. But this promise depends on how many Australians download and use the app. The minimum required uptake has been variously estimated at 40-60% of the population.

Our ongoing research, led by the Complex Human Data Hub of the University of Melbourne’s School of Psychological Sciences, surveyed the Australian public to understand their opinions and use of the COVIDSafe app, and other possible government tracking technologies.

Our research is helping us understand the conditions under which Australians will accept these technologies, and what’s holding them back.

Is there community support for COVIDSafe?

COVIDSafe uses Bluetooth to establish an anonymous contact registry of who a user has been close to, and for how long. If that user tests positive for COVID-19, they can voluntarily upload their contact registry to a central data store accessible only by state and territory health officials. Human contact tracers then alert those at risk and advise them on appropriate isolation measures.




Read more:
Explainer: what is contact tracing and how does it help limit the coronavirus spread?


Gaining broad community support for COVIDSafe requires the app’s perceived public health benefits to outweigh concerns about personal privacy, security and potential risk of harm.

As of May 7, from a sample of 536 survey participants, 44% reported having downloaded the COVIDSafe app. Promisingly, another 17% said they had not, but planned to.

We also asked all our respondents what technology they thought COVIDSafe used. Only 60% correctly responded with “Bluetooth”. Others responded with “location data” (19%), “mobile phone towers” (5%), or said they did not know (16%). This breakdown differed between people who had downloaded the app and those who had not.

Why are people opting in?

For those who downloaded COVIDSafe, most reported doing so to monitor others’ health (28%), their own health (19%), and in the hope of returning to normal activities sooner (18%). The least motivating factor was “to help the economy” (14%).

Most people who had not downloaded the app said they were weighing the pros and cons (22%), had not had time (19%) or had technical issues (12%). A small number were waiting for legislation that stipulated how the data could be used (6%).

This may be good news for the government, as many of these reasons are relatively straightforward to address.

Of those who reported they would not download the app, privacy was the main concern (31%).




Read more:
The COVIDSafe bill doesn’t go far enough to protect our privacy. Here’s what needs to change


Downloads do not equal usage

Whether those who download COVIDSafe are using it properly will largely determine its effectiveness.

Of those who had downloaded COVIDSafe, 90% said they had registered and kept Bluetooth switched on either at all times (77%) or when they left home (15%). Also, 58% said they had tried to share the app with others – helping to increase the rate of uptake.

Yet there remains some doubt as to whether turning Bluetooth on is sufficient for the app to work properly on iPhones. According to app developers, COVIDSafe works best on iPhones when the app is open in the foreground and the phone is unlocked.

But since these iPhone-related issues can be fixed (albeit potentially with some level of difficulty), it would be worthwhile for the government to invest in this.

International comparisons

Before the release of COVIDSafe, our research also tracked social support for similar apps and tracking technologies in other countries, including the UK, US, Taiwan and Germany.

We asked respondents about two hypothetical scenarios of government tracking.

The first scenario was similar to Australia’s COVIDSafe app rollout. In it, people were asked to download a voluntary government tracking app allowing them to be contacted if they had been exposed to COVID-19. In this scenario, 70% of our respondents said they would download the app.

The second scenario was less voluntary, wherein all people with a mobile phone had their location tracked. Governments would use the data to trace contacts, locate people who were violating lockdown orders and enforce restrictions with fines and arrests, if necessary. Interestingly, in this scenario even more people (79%) said they would download the app. If people could opt out, 92% indicated they would support the policy.

Importantly, these scenarios were completely hypothetical at the time, which may account for the intention-behaviour gap. That is, the gap between people’s values and attitudes, and their actual actions.

So, while 70% of people in our first survey said they would download a hypothetical government app, a later survey showed only 44% had actually downloaded COVIDSafe after its release.

This graph shows the proportion of participants who indicated they would download a voluntary government app (in green), and who found mandatory tracking through telecommunications companies acceptable (purple) in Taiwan, Australia, the UK, Germany and the US under various situations. ‘Sunset’ refers to a sunset clause, in which governments legislate promises to stop tracking and delete the associated data within six months. ‘Local data storage’ refers to when tracking data is stored on a user’s device, rather than a central repository. This data was collected prior to the announcement of the COVIDSafe app.

Australians showed high levels of support for both scenarios, particularly in comparison to other western democracies, such as the UK and the US.

An evolving situation

Prime Minister Scott Morrison has repeatedly linked COVIDSafe’s uptake to a potential easing of lockdown restrictions. But more recently, federal defence minister Marise Payne said the app’s uptake wouldn’t be a deciding factor for when restrictions were lifted.

When asked if the government should use the app’s uptake levels to decide when restrictions should be lifted, only 51% of our survey participants responded “yes”.

Overall, our data show Australians are generally accepting of the use of government tracking technologies to combat the COVID-19 emergency. However, only time will tell how this translates to real-world uptake of the COVIDSafe app.

Detailed results of the survey data from Australia, as well as the UK, US, Spain, Switzerland, Germany, and Taiwan, are continually being reported here.The Conversation

Simon J Dennis, Director of Complex Human Data Hub and Professor of Psychology, University of Melbourne; Amy Perfors, Associate Professor, UCLA School of Medicine; Daniel R. Little, Associate Professor in Mathematical Psychology, University of Melbourne; Joshua P. White, Research Assistant – Complex Human Data Hub, Melbourne School of Psychological Sciences, University of Melbourne; Lewis Mitchell, Senior Lecturer in Applied Mathematics, University of Adelaide; Nic Geard, Senior Lecturer, School of Computing and Information Systems, University of Melbourne; Senior Research Fellow, Doherty Institute for Infection and Immunity, University of Melbourne; Paul M. Garrett, Post Doctoral Research Fellow, University of Melbourne, and Stephan Lewandowsky, Chair of Cognitive Psychology, University of Bristol

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Contact tracing apps are vital tools in the fight against coronavirus. But who decides how they work?


Seth Lazar, Australian National University and Meru Sheel, Australian National University

Last week the head of Australia’s Digital Transformation Agency, Randall Brugeaud, told a Senate committee hearing an updated version of Australia’s COVIDSafe contact-tracing app would soon be released. That’s because the current version doesn’t work properly on Apple phones, which restrict background broadcasting of the Bluetooth signals used to tell when phones have been in close proximity.

For Apple to allow the app the Bluetooth access it requires to work properly, the new version will have to comply with a “privacy-preserving contact tracing” protocol designed by Apple and Google.

Unfortunately, the Apple/Google protocol supports a different (and untested) approach to contact tracing. It may do a better job of preserving privacy than the current COVIDSafe model, but has some public health costs.

And, importantly, the requirement to comply with this protocol takes weighty decisions away from a democratically elected government and puts them in the hands of tech companies.

A difficult transition

Both COVIDSafe and the new Apple/Google framework track exposure in roughly the same way. They broadcast a “digital handshake” to nearby phones, from which it’s possible to infer how close two users’ devices were, and for how long.

If the devices were closer than 1.5m for 15 minutes or more, that’s considered evidence of “close contact”. To stop the spread of COVID-19, the confirmed close contacts of people who test positive need to self-isolate.
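A minimal sketch of that rule, with invented data structures and thresholds rather than the real COVIDSafe or Apple/Google implementation, accumulates handshake durations per nearby device and flags anything totalling 15 minutes or more within the distance threshold:

```python
# Minimal sketch of the close-contact rule; data structures and thresholds
# are illustrative, not the actual COVIDSafe or Apple/Google implementation.
from collections import defaultdict

CLOSE_DISTANCE_M = 1.5
CLOSE_DURATION_MIN = 15

def close_contacts(handshakes):
    """handshakes: list of (other_device_id, estimated_distance_m, duration_min)."""
    minutes_near = defaultdict(float)
    for device_id, distance_m, duration_min in handshakes:
        if distance_m <= CLOSE_DISTANCE_M:
            minutes_near[device_id] += duration_min
    return [d for d, minutes in minutes_near.items() if minutes >= CLOSE_DURATION_MIN]

# Two short encounters with "a" add up to a close contact; "b" was never near enough.
print(close_contacts([("a", 1.2, 10), ("a", 1.0, 6), ("b", 3.0, 60)]))  # ['a']
```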

The differences between COVIDSafe’s current approach and the planned Apple/Google framework are in the architecture of the two systems, and to whom they reveal sensitive information. COVIDSafe’s approach is “centralised” and uses a central database to collect some contact information, whereas Apple and Google’s protocol is completely “decentralised”. For the latter, notification of potential exposure to someone who has tested positive is carried out between users alone, with no need for a central database.




Read more:
The COVIDSafe app was just one contact tracing option. These alternatives guarantee more privacy


This provides a significant privacy benefit: a central database would be a target for attackers, and could potentially be misused by law enforcement.

Protecting COVIDSafe’s central database, and ensuring “COVID App Data” is not misused has been the task of the draft legislation currently being considered. However, if the Apple/Google framework is adopted as planned, much of that legislation will become redundant, as there will be no centralised database to protect. Also, since data on users’ devices will be encrypted and inaccessible to health authorities, there’s no risk of it being misused.




Read more:
The COVIDSafe bill doesn’t go far enough to protect our privacy. Here’s what needs to change


For COVIDSafe to comply with the new Apple/Google framework, it would need to be completely rewritten, and the new app would most likely not be interoperable with the current version. This means we’d either have two systems running in parallel, or we’d have to ensure that everyone updates.

Less information for contact tracers

The Apple/Google approach strictly limits the amount of information shared with all parties, including traditional contact tracers.

When a user’s “risk score” exceeds a threshold the app will send them a pop-up. The only information revealed to the user and health authorities will be the date of exposure, its duration, and the strength of the Bluetooth signal at the time. The app would not reveal, to anyone, precisely when a potentially risky encounter occurred, or to whom the user was exposed.
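A hedged sketch of what such a threshold check could look like is below. The weights, formula and field names are assumptions made up for illustration; the actual Apple/Google exposure-notification scoring is more involved and configurable by health authorities.

```python
# Illustrative exposure "risk score"; weights and formula are invented,
# not the actual Apple/Google exposure-notification scoring.
def risk_score(duration_min, attenuation_db, days_since_exposure):
    duration_weight = min(duration_min / 30.0, 1.0)            # cap the effect of long exposures
    proximity_weight = max(0.0, 1.0 - attenuation_db / 90.0)   # weaker signal -> lower weight
    recency_weight = max(0.0, 1.0 - days_since_exposure / 14.0)
    return duration_weight * proximity_weight * recency_weight

THRESHOLD = 0.3

exposure = {"duration_min": 30, "attenuation_db": 45, "days_since_exposure": 2}
score = risk_score(**exposure)
if score >= THRESHOLD:
    # The pop-up reveals only date, duration and signal strength -- never who.
    print(f"Possible exposure notified (score {score:.2f})")
```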

This, again, has privacy benefits, but also public health costs. This kind of “exposure notification” (as Apple and Google call it, though proximity notification might be more accurate) can be used to supplement traditional contact tracing, but it can’t be integrated into it, because it doesn’t entrust contact tracers with sensitive information.

Benefits of traditional methods

As experts have already shown, the duration and strength of Bluetooth signals are weak evidence of potentially risky exposure, and can result in both false positives and false negatives.

COVIDSafe’s current approach entrusts human contact tracers with more data than the Apple/Google framework allows – both when, and to whom, the at-risk person was exposed. This enables a more personalised risk assessment, with potentially fewer errors. Contact tracers can help people recall encounters they may otherwise forget, and provide context to information given by the app.

For example, the knowledge that a possible close contact happened when both parties were wearing personal protective equipment might help avoid a false positive. Similarly, learning that someone who tested positive had a close contact with a user, who was with friends who weren’t running the app at the time, might enable us to alert those friends, and so avoid a false negative.

In addition, just having the message come from a human rather than a pop-up might make people more likely to actually self-isolate; we only control the spread if we actually self-isolate when instructed. And, by providing all this data to public health authorities, COVIDSafe’s current approach also grants experts epidemiological insights into the disease.

The two approaches are also supported by different evidence. Apple and Google’s decentralised exposure notification method has never been tried in a pandemic, and is supported by evidence from simulations. However, app-enhanced contact tracing akin to what COVIDSafe does (except using GPS, not Bluetooth) was road-tested in the Ebola outbreak in West Africa, with promising (though inconclusive) results.

Who should decide?

So, should the Australian government comply with Apple and Google’s privacy “laws” and design a new app that’s different from COVIDSafe? Or should Apple update its operating system so COVIDSafe works effectively in the background? Perhaps more importantly, who should decide?

If Apple and Google’s approach achieved the same public health goals as COVIDSafe, but better protected privacy, then – sunk costs notwithstanding – Australia should design a new app to fit with their framework. As we’ve seen, though, the two approaches are genuinely different, with different public health benefits.

If COVIDSafe were likely to lead to violations of fundamental privacy rights, then Apple would be morally entitled to stick to their guns, and continue to restrict it from working in the background. But the current COVIDSafe draft legislation – while not perfect – adequately addresses concerns about how, and by whom, data is collected and accessed. And while COVIDSafe has security flaws, they can be fixed.




Read more:
The COVIDSafe bill doesn’t go far enough to protect our privacy. Here’s what needs to change


Decisions on how to weigh values like privacy and public health should be based on vigorous public debate, and the best advice from experts in relevant fields. Disagreement is inevitable.

But in the end, the decision should be made by those we voted in, and can vote out if they get it wrong. It shouldn’t be in the hands of tech executives outside of the democratic process.The Conversation

Seth Lazar, Professor, Australian National University and Meru Sheel, Epidemiologist | Senior Research Fellow, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How safe is COVIDSafe? What you should know about the app’s issues, and Bluetooth-related risks




James Jin Kang, Edith Cowan University and Paul Haskell-Dowland, Edith Cowan University

The Australian government’s COVIDSafe app has been up and running for almost a fortnight, with more than five million downloads.

Unfortunately, since its release many users – particularly those with iPhones – have been in the dark about how well the app works.

Digital Transformation Agency head Randall Brugeaud has now admitted the app’s effectiveness on iPhones “deteriorates and the quality of the connection is not as good” when the phone is locked, and the app is running in the background.

There has also been confusion regarding where user data is sent, how it’s stored, and who can access it.

Conflicts with other apps

Using Bluetooth, COVIDSafe collects anonymous IDs from others who are also using the app, assuming you come into range with them (and their smartphone) for a period of at least 15 minutes.

Bluetooth must be kept on at all times (or at least turned on when leaving home). But this setting is specifically advised against by the Office of the Australian Information Commissioner.

It’s likely COVIDSafe isn’t the only app that uses Bluetooth on your phone. So once you’ve enabled Bluetooth, other apps may start using it and collecting information without your knowledge.

Bluetooth is also energy-intensive, and can quickly drain phone batteries, especially if more than one app is using it. For this reason, some may be reluctant to opt in.

There have also been reports of conflicts with specialised medical devices. Diabetes Australia has received reports of users encountering problems using Bluetooth-enabled glucose monitors at the same time as the COVIDSafe app.

If this happens, the current advice from Diabetes Australia is to uninstall COVIDSafe until a solution is found.

Bluetooth can still track your location

Many apps require a Bluetooth connection and can track your location without actually using GPS.

Bluetooth “beacons” are progressively being deployed in public spaces – with one example in Melbourne supporting visually impaired shoppers. Some apps can use these to log locations you have visited or passed through. They can then transfer this information to their servers, often for marketing purposes.

To avoid apps using Bluetooth without your knowledge, you should deny Bluetooth permission for all apps in your phone’s settings, and then grant permissions individually.

If privacy is a priority, you should also read the privacy policy of all apps you download, so you know how they collect and use your information.

Issues with iPhones

The iPhone operating system (iOS), depending on the version, doesn’t allow COVIDSafe to work properly in the background. The only solution is to leave the app running in the foreground. And if your iPhone is locked, COVIDSafe may not be recording all the necessary data.

You can change your settings to stop your iPhone going into sleep mode. But this again will drain your battery more rapidly.

Brugeaud said older models of iPhones would also be less capable of picking up Bluetooth signals via the app.

It’s expected these issues will be fixed following the integration of contact tracing technology developed by Google and Apple, which Brugeaud said would be done within the next few weeks.




Read more:
The COVIDSafe bill doesn’t go far enough to protect our privacy. Here’s what needs to change


Vulnerabilities to data interception

If a user tests positive for COVID-19 and consents to their data being uploaded, the information is then held by the federal government on an Amazon Web Services server in Australia.

Data from the app is stored on a user’s device and transmitted in an encrypted form to the server. Although it’s technically possible to intercept such communications, the data would still be encrypted and therefore offer little value to an attacker.
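
As an illustration of why intercepted traffic would be of little use, the sketch below encrypts a contact record before it leaves the device. It uses the third-party `cryptography` package and invented field names; COVIDSafe’s actual encryption scheme and payload format differ, so treat this as a sketch of the principle rather than the app’s method.

```python
# pip install cryptography   (illustrative sketch, not COVIDSafe's actual scheme)
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in a real system the key would not travel with the data
cipher = Fernet(key)

record = {"anon_id": "temp-id-42", "start": "2020-05-10T09:00", "end": "2020-05-10T09:20"}
ciphertext = cipher.encrypt(json.dumps(record).encode())

# Anyone intercepting the transmission sees only ciphertext.
print(ciphertext[:20], b"...")

# Only the key holder can recover the record.
print(json.loads(cipher.decrypt(ciphertext)))
```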

The government has said the data won’t be moved offshore or made accessible to US law enforcement. But various entities, including Australia’s Law Council, have said the privacy implications remain murky.

That said, it’s reassuring the Amazon data centre (based in Sydney) has achieved a very high level of security as verified by the Australian Cyber Security Centre.

Can the federal government access the data?

The federal government has said the app’s data will only be made available to state and territory health officials. This has been confirmed in a determination under the Biosecurity Act and is due to be implemented in law.

Federal health minister Greg Hunt said:

Not even a court order during an investigation of an alleged crime would be allowed to be used [to access the data].

Although the determination and proposed legislation clearly define the who and how of access to COVIDSafe data, history suggests the government may not be best placed to look after our data.

It seems the government has gone to great lengths to promote the security and privacy of COVIDSafe. However, the government commissioned the development of the app, so someone will have the means to obtain the information stored within the system – the “keys” to the vault.

If the government did covertly obtain access to the data, it’s unlikely we would find out.

And while contact information stored on user devices is deleted on a 21-day rolling basis, the Department of Health has said data sent to Amazon’s server will “be destroyed at the end of the pandemic”. It’s unclear how such a date would be determined.
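
The 21-day rolling deletion on the handset is simple to picture. A minimal sketch, assuming contacts are stored as timestamped records, might look like this:

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=21)  # the stated on-device retention window

def prune(contacts, now):
    """Keep only contact records recorded within the last 21 days."""
    return [c for c in contacts if now - c["recorded_at"] <= RETENTION]

contacts = [
    {"anon_id": "temp-id-7",  "recorded_at": datetime(2020, 4, 10)},
    {"anon_id": "temp-id-42", "recorded_at": datetime(2020, 5, 9)},
]
print(prune(contacts, now=datetime(2020, 5, 11)))  # only the May record survives
```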

Ultimately, it comes down to trust – something which seems to be in short supply.




Read more:
The COVIDSafe app was just one contact tracing option. These alternatives guarantee more privacy


James Jin Kang, Lecturer, Computing and Security, Edith Cowan University and Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The COVIDSafe bill doesn’t go far enough to protect our privacy. Here’s what needs to change


Katharine Kemp, UNSW and Graham Greenleaf, UNSW

The Australian government will need to correct earlier misstatements and improve privacy protections to gain the trust of the millions of Australians being called on to download the COVIDSafe contact tracing app.

The draft Privacy Amendment (Public Health Contact Information) Bill 2020, or the “COVIDSafe bill”, released yesterday, is the first step towards parliamentary legislation providing privacy protections for users of the app.

The COVIDSafe bill includes some significant improvements on the protections offered by federal health minister Greg Hunt’s current determination under the Biosecurity Act, which put rules in place to encourage uptake of the app. However, the bill falls short on other substantial concerns.

Improvements incorporated in the bill

The COVIDSafe bill includes several amendments to the privacy protections originally set out in the determination, which the legislation is intended to replace.

The bill, like the determination, would make it illegal to gather or use data collected by the app for purposes other than those specified. Such an offence would be punishable by up to five years in prison.

Importantly, the bill also permits individuals to take some enforcement action on their own behalf if the privacy protections are breached, rather than relying on the government to bring criminal proceedings. It does this by making a breach of those protections an “interference with privacy” under the Privacy Act. This means users can make a complaint to the federal privacy commissioner.

The bill also improves the kind of consent needed to upload a user’s list of contacts to the central data store, if the user tests positive for COVID-19. Instead of allowing anyone with control of a mobile phone to consent, the bill requires consent from the actual registered COVIDSafe user.

The legislation will also apply to state and territory health officials, so that data they access for contact tracing purposes is covered if they misuse it.




Read more:
The COVIDSafe app was just one contact tracing option. These alternatives guarantee more privacy


Not 1.5 metres, not 15 minutes

A crucial problem with the bill is it allows the government to collect much more personal data than is necessary for contact tracing.

Just before the app’s release, federal services minister Stuart Roberts said the app would only collect data about other app users who came within 1.5 metres for at least 15 minutes. He also said that, when a user tests positive, the app would allow them to consent to the upload of only those contacts.

Neither of these statements is true.

According to the Privacy Impact Assessment of COVIDSafe, the app collects, and (with the consent of a user who tests positive) uploads to the central data store, data about all other users who came within Bluetooth signal range within the preceding 21 days, even if only for a minute.

While the Department of Health more recently said it would prevent state and territory health authorities from accessing contacts other than those that meet the “risk parameters”, the bill includes no data collection or use restrictions based on the distance or duration of contact.

The government should correct its misstatements and, to the extent technically possible, minimise the data collected and decrypted to what is necessary.
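
Minimising the upload in this way is technically straightforward. The sketch below filters encounters by duration (and a crude proximity flag) before anything is sent, using invented field names and thresholds as stand-ins for the stated “risk parameters”:

```python
from datetime import timedelta

# Invented stand-ins for the "risk parameters"; not the health authorities' actual values.
MIN_DURATION = timedelta(minutes=15)

def contacts_to_upload(encounters):
    """Upload only encounters meeting the duration and proximity thresholds;
    everything else never leaves the phone."""
    return [
        e for e in encounters
        if e["duration"] >= MIN_DURATION and e["within_1_5m"]
    ]

encounters = [
    {"anon_id": "temp-id-3", "duration": timedelta(minutes=2),  "within_1_5m": True},
    {"anon_id": "temp-id-9", "duration": timedelta(minutes=25), "within_1_5m": True},
]
print(contacts_to_upload(encounters))  # only the 25-minute encounter qualifies
```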

An overly narrow definition of protected data

The privacy protections in the bill only apply to certain data. And the definition of that data does not capture critical personal data created and used in the process of COVIDSafe contact tracing.

The bill defines “COVID app data” as data collected or generated through the operation of the app which has been stored on a mobile phone or device. This would include the encrypted contacts stored on a user’s phone.

But if the user tests positive and uploads those encrypted contacts to the national data store, the decrypted records of their contacts over the last 21 days do not clearly fall within that definition. Data transformed or derived from that data by state and territory health officers would also fall outside the definition.

“COVID app data” should be re-defined to expressly include these types of data.

No source code

Ministers have said COVIDSafe’s source code, or at least the parts of it that do not pose “security issues”, would be made available within a fortnight of the app’s release. Yet there is no sign of this.

The full source code should be made public at least a week prior to the COVIDSafe Act being enacted so experts can identify weaknesses in privacy protections.

The bill also fails to provide any guarantee of independent scientific advice on whether the app is continuing to be of practical benefit, or should be terminated.

Loopholes in the rules against coercion

The bill contains some good protections against coercing people to download or use the COVIDSafe app, but these need to be strengthened: it should also prohibit requiring people to disclose whether they have installed the app, and ban conditions that discriminate against those who have not. This is especially necessary given various groups, including chambers of commerce, have already proposed (illegal) plans to make participation or entry conditional on app usage.

Some behavioural economists have proposed making government payments, tax breaks or other financial rewards dependent on individuals using the app. The bill should make clear that no discount, payment or other financial incentive may be conditional on a person downloading or using the app.

The government must abide by its promise that use of the COVIDSafe app is voluntary. Coercion or “pseudo-voluntary” agreement should not be used to circumvent this.

‘Google knows everything about you’ doesn’t cut it

Many have argued Australians who do not yet trust the COVIDSafe app should download it anyway since Google, Facebook, Uber or Amazon already “know far more about you”. But the fact that some entities are being investigated for data practices which disadvantage consumers is not a reason to diminish the need for privacy protections.

The harms from government invasions of privacy have even more dramatic and immediate impacts on our liberty.

Parliament will debate the COVIDSafe Bill in the sitting expected to start May 12, and a Senate Committee will continue to investigate it. Many are likely to wait for improved protections in the final legislation before making the choice to opt in.




Read more:
Coronavirus contact-tracing apps: most of us won’t cooperate unless everyone does


Katharine Kemp, Senior Lecturer, Faculty of Law, UNSW, and Academic Lead, UNSW Grand Challenge on Trust, UNSW and Graham Greenleaf, Professor of Law and Information Systems, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.