The plight of Afghan security contractors highlights the legal and moral risks of outsourcing war


Anna Powles, Massey University

By first denying and then granting visas to more than 100 Afghan contractors who guarded its embassy in Kabul, Australia has shone a light on the murky world of the private security industry.

According to the lawyer and former army officer representing the security guards, his clients had yet to receive the humanitarian visas and the about-face was merely an attempt by Australian officials “to look like they have done their job when they sat on their hands for so long”.

The Australian case mirrors the British government’s policy reversal concerning 125 Afghan security guards at its Kabul embassy.

They, too, were initially informed they were ineligible for emergency evacuation due to being employed by Canadian private security firm GardaWorld, only for the decision to be overruled late last week.

In both cases, these Afghan contractors have fallen into the shady legal gap between the private security company that employed them locally and the governments that contracted their employers.

As one GardaWorld employee said when he was told his contract would be terminated:

No one asked whether we are safe or not. No one asked whether our lives are in danger or not.

Privatising and outsourcing war

Afghanistan, famously known as “the graveyard of empires”, has been a gravy train for the global private security industry for the past two decades, as the war was increasingly privatised and outsourced.

Under the Trump administration, private security contractors with Pentagon contracts numbered nearly 6,000, costing US$2.3 billion (A$3.1 billion) in 2019. As the US military withdrawal progressed, the number of private contractors dropped to about 1,400 by July.




Read more:
As the Taliban’s grip on Afghanistan tightens, New Zealand must commit to taking more refugees


Private security firms were such a critical element of the war effort, however, that their departure is considered a key factor in the collapse of the Afghan army.

The appeal of these private security contractors lies in their arm's-length advantage — they are relatively disposable and carry little political cost. This allows the industry to operate opaquely, with little oversight and even less accountability.

In the case of the Australian embassy guards, it would appear their direct employers have done little to secure their safety. How, then, can these companies and the governments that employ them be held accountable?

Little binding protection

The Montreux Document on Private Military and Security Companies – which reflects inter-governmental consensus that international law applies to private security companies in war zones – requires private security companies “to respect and ensure the welfare of their personnel”. Unfortunately, this is not a binding agreement.

The International Code of Conduct for Private Security Service Providers (ICoCA) – known as “the code” – lays out the responsibilities of private security providers under international law. It requires signatory companies to:

[…] provide a safe and healthy working environment, recognising the possible inherent dangers and limitations presented by the local environment [and to] ensure that reasonable precautions are taken to protect relevant staff in high-risk or life-threatening operations.




Read more:
The Taliban may have access to the biometric data of civilians who helped the U.S. military


Australia is a signatory to the ICoCA, as are private security companies GardaWorld, Hart International Australia and Hart Security Limited, all of which operate in Afghanistan and have at various times been contracted by the Australian government.

But again, like the Montreux Document, the ICoCA is non-binding. However, ICoCA Executive Director Jamie Williamson has said:

The situation in Afghanistan is shining a spotlight on the duty of care clients of private security companies have towards local staff and their families […] We expect to see both our government and corporate members ensure the safety and well-being of all private security personnel working on government and other contracts, whatever their nationality.

Still no guarantee of safety

This duty of care now appears to have been extended to those guards who worked for the Australian and British governments in Afghanistan — albeit at the last minute. As one contractor told Australian media, he and his colleagues first applied for protection visas in 2012.

But their safety remains uncertain. The visas do not guarantee safe passage to Kabul's international airport, where evacuation efforts are chaotic. Over the past weekend alone, 14 civilians were killed trying to flee the Taliban takeover.

There are also concerns that safe passage through Taliban checkpoints is not being properly coordinated by US and NATO allies, leaving dangerous alternative routes as the only option.




Read more:
Where do Afghanistan’s refugees go?


Sheltering until they can safely travel to the airport is also fraught. As one guard explained:

Every day there is news that the Taliban will start a search for each house […] looking for people who have served the army and those who have served the foreign army.

Australia has made a legal and moral commitment to provide refuge to these people. But with the Taliban’s so-called red line of August 31 looming, the window to evacuate them and their families is closing.

And while the global private security companies may have shut up shop in Afghanistan for now, the consequences and human costs associated with outsourcing war linger on.

Anna Powles, Senior Lecturer in Security Studies, Massey University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How hackers can use message mirroring apps to see all your SMS texts — and bypass 2FA security



Syed Wajid Ali Shah, Deakin University; Jongkil Jay Jeong, Deakin University, and Robin Doss, Deakin University

It's now well known that usernames and passwords aren't enough to securely access online services. A recent study highlighted that more than 80% of all hacking-related breaches happen due to compromised and weak credentials, with three billion username/password combinations stolen in 2016 alone.

As such, the implementation of two-factor authentication (2FA) has become a necessity. Generally, 2FA aims to provide an additional layer of security to the relatively vulnerable username/password system.

It works too. Figures suggest users who enabled 2FA ended up blocking about 99.9% of automated attacks.

But as with any good cybersecurity solution, attackers can quickly come up with ways to circumvent it. One method is to exploit the one-time codes sent as an SMS to a user's smartphone.

Yet many critical online services in Australia still use SMS-based one-time codes, including myGov and the Big 4 banks: ANZ, Commonwealth Bank, NAB and Westpac.




Read more:
A computer can guess more than 100,000,000,000 passwords per second. Still think yours is secure?


So what’s the problem with SMS?

Major vendors such as Microsoft have urged users to abandon 2FA solutions that leverage SMS and voice calls. This is because SMS is infamous for its poor security, leaving it open to a host of different attacks.

For example, SIM swapping has been demonstrated as a way to circumvent 2FA. SIM swapping involves an attacker convincing a victim's mobile service provider they themselves are the victim, and then requesting the victim's phone number be switched to a device of their choice.

SMS-based one-time codes have also been shown to be compromised through readily available tools such as Modlishka, which leverage a technique called reverse proxying to relay communication between the victim and the service being impersonated.

So in the case of Modlishka, it intercepts communication between a genuine service and a victim, and tracks and records the victim's interactions with the service, including any login credentials they may use.

In addition to these existing vulnerabilities, our team has found further vulnerabilities in SMS-based 2FA. One particular attack exploits a feature provided on the Google Play Store to automatically install apps from the web to your Android device.

Due to syncing services, if a hacker manages to compromise your Google login credentials on their own device, they can then install a message mirroring app directly onto your smartphone.

If an attacker has access to your credentials and manages to log into your Google Play account on a laptop (although you will receive a prompt), they can then install any app they’d like automatically onto your smartphone.

The attack on Android

Our experiments revealed a malicious actor can remotely access a user's SMS-based 2FA with little effort, through the use of a popular app (name and type withheld for security reasons) designed to synchronise a user's notifications across different devices.

Specifically, attackers can leverage a compromised email/password combination connected to a Google account (such as username@gmail.com) to nefariously install a readily-available message mirroring app on a victim’s smartphone via Google Play.

This is a realistic scenario since it’s common for users to use the same credentials across a variety of services. Using a password manager is an effective way to make your first line of authentication — your username/password login — more secure.

Once the app is installed, the attacker can apply simple social engineering techniques to convince the user to enable the permissions required for the app to function properly.

For example, they may pretend to be calling from a legitimate service provider to persuade the user to enable the permissions. After this they can remotely receive all communications sent to the victim’s phone, including one-time codes used for 2FA.

Although multiple conditions must be fulfilled for the aforementioned attack to work, it still demonstrates the fragile nature of SMS-based 2FA methods.

More importantly, this attack doesn’t need high-end technical capabilities. It simply requires insight into how these specific apps work and how to intelligently use them (along with social engineering) to target a victim.

The threat is even more real when the attacker is a trusted individual (e.g., a family member) with access to the victim’s smartphone.

What’s the alternative?

To remain protected online, you should check whether your initial line of defence is secure. First check your password to see if it’s compromised. There are a number of security programs that will let you do this. And make sure you’re using a well-crafted password.

We also recommend you limit the use of SMS as a 2FA method if you can. You can instead use app-based one-time codes, such as through Google Authenticator. In this case the code is generated within the Google Authenticator app on your device itself, rather than being sent to you.
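To see what this looks like end to end, here is a minimal sketch of how a service could enrol a user and verify app-based codes, assuming the third-party pyotp library (any RFC 6238 implementation would do; the account name and issuer are placeholders):

```python
# Minimal sketch of app-based one-time codes, assuming the
# third-party "pyotp" library (pip install pyotp).
import pyotp

# Enrolment: the service generates a shared secret and shows it to the
# user, usually as a QR code scanned by the authenticator app.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="user@example.com",
                            issuer_name="ExampleService"))

# Login: the app derives the current code from the secret and the clock;
# the server does the same and compares. Nothing is sent over SMS.
code = totp.now()         # what the authenticator app would display
print(totp.verify(code))  # True within the current time window
```

Because the secret never leaves the two endpoints after enrolment, there is no SMS message for a mirroring app to capture.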

However, this approach can also be compromised by hackers using some sophisticated malware. A better alternative would be to use dedicated hardware devices such as YubiKey.

Hand holds up a YubiKey USB with the text 'Citrix' in the background.
The YubiKey, first developed in 2008, is an authentication device designed to support one-time password and 2FA protocols without having to rely on SMS-based 2FA.
Shutterstock

These are small USB (or near-field communication-enabled) devices that provide a streamlined way to enable 2FA across different services.

Such physical devices need to be plugged into or brought into close proximity of a login device as a part of 2FA, therefore mitigating the risks associated with visible one-time codes, such as codes sent by SMS.

It must be stressed an underlying condition to any 2FA alternative is the user themselves must have some level of active participation and responsibility.

At the same time, further work must be carried out by service providers, developers and researchers to develop more accessible and secure authentication methods.

Essentially, these methods need to go beyond 2FA and towards a multi-factor authentication environment, where multiple methods of authentication are simultaneously deployed and combined as needed.




Read more:
Can I still be hacked with 2FA enabled?




Syed Wajid Ali Shah, Research Fellow, Centre for Cyber Security Research and Innovation, Deakin University; Jongkil Jay Jeong, CyberCRC Research Fellow, Centre for Cyber Security Research and Innovation (CSRI), Deakin University, and Robin Doss, Research Director, Centre for Cyber Security Research and Innovation, Deakin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Calling out China for cyberattacks is risky — but a lawless digital world is even riskier



Alexander Gillespie, University of Waikato

Today's multi-country condemnation of cyberattacks by Chinese state-sponsored agencies was a sign of increasing frustration at recent behaviour. But it also masks the real problem — international law isn't strong or coherent enough to deal with this growing threat.

The coordinated announcement by several countries, including the US, UK, Australia and New Zealand, echoes the most recent threat assessment from the US intelligence community: cyber threats from nation states and their surrogates will remain acute for the foreseeable future.

Joining the chorus against China may be diplomatically risky for New Zealand and others, and China has already described the claims as “groundless and irresponsible”. But there is no doubt the problem is real.

The latest report from New Zealand’s Government Communications Security Bureau (GCSB) recorded 353 cyber security incidents in the 12 months to the middle of 2020, compared with 339 incidents in the previous year.

Given the focus is on potentially high-impact events targeting organisations of national significance, this is likely only a small proportion of the total. But the GCSB estimated state-sponsored attacks accounted for up to 30% of incidents recorded in 2019-20.

Since that report, more serious incidents have occurred, including attacks on the stock exchange and Waikato Hospital. The attacks are becoming more sophisticated and inflicting greater damage.

Globally, there are warnings that a major cyberattack could be as deadly as a weapon of mass destruction. The need to de-escalate is urgent.

Global solutions missing

New Zealand would be relatively well-prepared to cope with domestic incidents using criminal, privacy and even harmful digital communications laws. But most cybercrime originates overseas, and global solutions don’t really exist.

In theory, the attacks can be divided into two types — those by criminals and those by foreign governments. In reality, the line between the two is blurred.

Dealing with foreign criminals is slightly easier than combating attacks by other governments, and Prime Minister Jacinda Ardern has recognised the need for a global effort to fight this kind of cybercrime.




Read more:
With cyberattacks growing more frequent and disruptive, a unified approach is essential


To that end, the government recently announced New Zealand was joining the Council of Europe’s Convention on Cybercrime, a global regime signed by 66 countries based on shared basic legal standards, mutual assistance and extradition rules.

Unfortunately, some of the countries most often suspected of allowing international cybercrime to be committed from within their borders have not signed, meaning they are not bound by its obligations.

That includes Russia, China and North Korea. Along with several other countries not known for their tolerance of an open, free and secure internet, they are trying to create an alternative international cybercrime regime, now entering a drafting process through the United Nations.

Cyberattacks as acts of war

Dealing with attacks by other governments (as opposed to criminals) is even harder.

Only broad principles exist, including that countries refrain from the threat or use of force against the territorial integrity or political independence of any state, and that they should behave in a friendly way towards one another. If one is attacked, it has an inherent right of self-defence.




Read more:
Improving cybersecurity means understanding how cyberattacks affect both governments and civilians


Malicious state-sponsored cyber activities involving espionage, ransoms or breaches of privacy might qualify as unfriendly and in bad faith, but they are not acts of war.

However, cyberattacks directed by other governments could amount to acts of war if they cause death, serious injury or significant damage to the targeted state. Cyberattacks that meddle in foreign elections may, depending on their impact, dangerously undermine peace.

And yet, despite these extreme risks, there is no international convention governing state-based cyberattacks in the ways the Geneva Conventions cover the rules of warfare or arms control conventions limit weapons of mass destruction.

Vladimir Putin shaking hands with Joe Biden
Drawing a red line on cybercrime: US President Joe Biden meets Russian President Vladimir Putin in Geneva in June.
GettyImages

Risks of retaliation

The latest condemnation of Chinese-linked cyberattacks notwithstanding, the problem is not going away.

At their recent meeting in Geneva, US President Joe Biden told his Russian counterpart, Vladimir Putin, the US would retaliate against any attacks on its critical infrastructure. A new US agency aimed at countering ransomware attacks would respond in “unseen and seen ways”, according to the administration.

Such responses would be legal under international law if there were no alternative means of resolution or reparation, and could be argued to be necessary and proportionate.

Also, the response can be unilateral or collective, meaning the US might call on its friends and allies to help. New Zealand has said it is open to the proposition that victim states can, in limited circumstances, request assistance from other states to apply proportionate countermeasures against someone acting in breach of international law.




Read more:
Ransomware, data breach, cyberattack: What do they have to do with your personal information, and how worried should you be?


A drift towards lawlessness

But only a month after Biden drew his red line with Putin, another massive ransomware attack crippled hundreds of service providers across 17 countries, including New Zealand schools and kindergartens.

The Russian-affiliated ransomware group REvil, which was probably behind the attacks, mysteriously disappeared from the internet a few weeks later.




Read more:
Cyber Cold War? The US and Russia talk tough, but only diplomacy will ease the threat


Things are moving fast and none of it is very reassuring. In an interconnected world facing a growing threat from cyberattacks, we appear to be drifting away from order, stability and safety and towards the darkness of increasing lawlessness.

The coordinated condemnation of China by New Zealand and others has considerably upped the ante. All parties should now be seeking a rules-based international solution or the risk will only grow.

Alexander Gillespie, Professor of Law, University of Waikato

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How safe are your data when you book a COVID vaccine?



Joan Henderson, University of Sydney and Kerin Robinson, La Trobe University

The Australian government has appointed the commercial company HealthEngine to establish a national booking system for COVID-19 vaccinations.

Selected through a Department of Health limited select tender process, the platform is being used by vaccine providers who don’t have their own booking system.

However, HealthEngine has a track record of mishandling confidential patient information.

Previous problems

In 2019 the Australian Competition and Consumer Commission took HealthEngine to court for allegedly skewing reviews and ratings of medical practices on its platform and selling more than 135,000 patients’ details to private health insurance brokers.

The Federal Court fined HealthEngine A$2.9 million in August 2020, just eight months ago.

Department of Health associate secretary Caroline Edwards told a Senate hearing the issues were “historical in nature, weren’t intentional and did not involve the sharing of clinical or medical related information”.

How might the alleged misconduct, which earned HealthEngine A$1.8 million, be considered “historical in nature” and “not intentional”?

Edwards added that HealthEngine had strengthened its privacy and security processes, following recommendations in the ACCC’s digital platforms inquiry report. Regarding the new contract, she said:

[…] the data available to HealthEngine through what it’s been contracted to do does not include any clinical information or any personal information over what’s required for people to book.

That’s somewhat reassuring, considering the larger amount of information usually requested from patients booking an appointment (as per HealthEngine’s current Privacy Policy).

The list of personal information HealthEngine may collect from patients booking an appointment with a health professional.
Screenshot

Importantly, HealthEngine then owns this information. This raises an important question: why is so much personal information requested just to book an ordinary appointment?

A need for accessible information

While using HealthEngine to book a vaccination is not mandatory, individual practices will determine whether patients can make appointments over the phone, through HealthEngine's platform, or through another existing platform.

Personal details currently requested through HealthEngine's vaccination booking system are listed in the policy excerpt below.

HealthEngine’s Privacy Policy for COVID-19 vaccination bookings.
Screenshot

This list is substantially shorter than the one concerning non-COVID related bookings. That said, there’s still more information being gathered than would be required for the sole purpose of arranging a patient’s vaccination.

What is the justification for this system to collect data about patients’ non-COVID medical and health services, or the pages they visit?

A representative from the Department of Health told The Conversation that all patient data collected through the COVID vaccination booking system was owned by the department, not HealthEngine. But what need would the department have to collect web analytics data about what sites a patient visits?

An underlying administrative principle of any medical appointment platform is that it should collect the minimum amount of data needed to fulfil its purpose.

Also, HealthEngine’s website reveals the company has, appropriately, created an additional privacy policy for its COVID-19 vaccination booking platform. However, this is currently embedded within its pre-existing policy. Therefore it’s unlikely many people will find, let alone read it.

For transparency, the policy should be easy to find, clearly labelled and presented as distinct from HealthEngine’s regular policies. A standalone page would be feasible, given the value of the contract is more than A$3.8 million.

What protections are in place?

Since the pandemic began, concerns have been raised regarding the lack of clear information and data privacy protection afforded to patients by commercial organisations.

Luckily, there are safeguards in place to regulate how patient data are handled. The privacy of data generated through health-care provision (such as in general practices, hospitals, community health centres and pharmacies) is protected under state and territory or Commonwealth laws.

Data reported (on a compulsory basis) by vaccinating clinicians to the Australian Immunisation Register fall within the confines of the Australian Immunisation Register Act 2015 and its February 2021 amendment.




Read more:
Queensland Health’s history of software mishaps is proof of how hard e-health can be


Also, data collected through the Department of Health’s vaccination registration system are legally protected under the Privacy Act 1988, as are data collected via HealthEngine’s government-approved COVID-19 vaccination booking system.

But there’s still a lack of clarity regarding what patients are being asked to consent to, the amount of information collected and how it’s handled. It’s a critical legal and ethical requirement patients have the right to consent to the use of their personal information.

If the privacy policy of a booking system is unclear, this presents a risk for patients who have challenges with the English language or literacy, or who are potentially distracted by pain or anxiety while making an appointment.
Shutterstock

Gaps in our knowledge

As health information managers, we had further questions regarding the government’s decision to appoint HealthEngine as a national COVID-19 vaccination booking provider. The Conversation put these questions to HealthEngine, which forwarded them to the Department of Health. They were as follows.

  1. Is there justification for the rushed outsourcing of the national appointment platform, given the number of vaccine recipients whose data will be collected?
  2. How did the department’s “limited select tender” process ensure equity?
  3. Who will own data collected via HealthEngine’s optional national booking system?
  4. What rights will the “owner” of the data have to give third-party access via the sharing or selling of data?
  5. What information will vaccine recipients be given on their right to not use HealthEngine’s COVID-19 vaccination booking system (or any appointment booking system) if they’re uncomfortable providing personal information to a commercial entity?
  6. How will these individuals be reassured they may still receive a vaccine, should they not wish to use the system?

In response, a department representative provided information already available online here, here, here and here. They gave no clarification about how patients might be guided if they’re directed to the HealthEngine platform but don’t want to use it.

They advised the data collected by HealthEngine:

can not be used for secondary purposes, and can only be disclosed to third-party entities as described in HealthEngine’s tailored Privacy Policy and Collection Notice, as well as the department’s Privacy Notice.

But according to HealthEngine’s privacy policy, this means patient data could still be provided to other health professionals a patient selects, and de-identified information given to the Department of Health. The policy states HealthEngine may also disclose patients’ personal information to:

  • third-party IT and software providers such as Adobe Analytics
  • professional advisers such as lawyers and auditors, for the purpose of providing goods or services to HealthEngine
  • courts, tribunals and law enforcement, as required by law or to defend HealthEngine’s legal rights and
  • other parties, as consented to by the patient, or as required by law.

Ideally, the answers to our questions would have helped shed light on the extent to which patient privacy was considered in the government’s decision. But inconsistencies between what is presented in the privacy policies and the Department of Health’s response have not clarified this.




Read more:
The government is hyping digitalised services, but not addressing a history of e-government fails




Joan Henderson, Senior Research Fellow (Hon). Editor, Health Information Management Journal (HIMJ), University of Sydney and Kerin Robinson, Adjunct Associate Professor, La Trobe University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Not just complacency: why people are reluctant to use COVID-19 contact-tracing apps




Farkhondeh Hassandoust, Auckland University of Technology

This week’s announcement of two new COVID-19 vaccine pre-purchase deals is encouraging, but doesn’t mean New Zealanders should become complacent about using the NZ COVID Tracer app during the summer holidays.

The immunisation rollout won't start until the second quarter of 2021, and the government is encouraging New Zealanders to continue using the app, including the recently upgraded Bluetooth function, as part of its plan to manage the pandemic during the holiday period.




Read more:
How to keep COVID-19 at bay during the summer holidays — and help make travel bubbles a reality in 2021


During the past weeks, the number of daily scans has dropped significantly, down from just over 900,000 scans per day at the end of November to fewer than 400,000 in mid-December.

With no active cases of COVID-19 in the community, complacency might be part of the issue in New Zealand, but as our research in the US shows, worries about privacy and trust continue to make people reluctant to use contact-tracing apps.

Concerns about privacy and surveillance

We surveyed 853 people from every state in the US to identify the factors promoting or inhibiting their use of contact-tracing applications. Our survey reveals two seemingly contradictory findings.

Individuals are highly motivated to use contact-tracing apps, for the sake of their own health and that of society as a whole. But the study also found people are concerned about privacy, social disapproval and surveillance.

The findings suggest people’s trust in the data collectors is dependent on the technology features of these apps (for example, information sensitivity and anonymity) and the privacy protection initiatives instigated by the authorities.

With the holiday season just around the corner — and even though New Zealand is currently free of community transmission — our findings are pertinent. New Zealanders will travel more during the summer period, and it is more important than ever to use contact-tracing apps to improve our chances of getting on top of any potential outbreaks as quickly as possible.

How, then, to overcome concerns about privacy and trust and make sure New Zealanders use the upgraded app during summer?

The benefits of adopting contact-tracing apps are mainly in shared public health, and it is important these societal health benefits are emphasised. In order to quell concerns, data collectors (government and businesses) must also offer assurance that people’s real identity will be concealed.

It is the responsibility of the government and the office of the Privacy Commissioner to ensure all personal information is managed appropriately.




Read more:
An Australia–NZ travel bubble needs a unified COVID contact-tracing app. We’re not there


Transparency and data security

Our study also found that factors such as peer and social influence, regulatory pressures and previous experiences with privacy loss underlie people’s readiness to adopt contact-tracing apps.

The findings reveal that people expect regulatory protection if they are to use contact-tracing apps. This confirms the need for laws and regulations with strict penalties for those who collect, use, disclose or decrypt collected data for any purpose other than contact tracing.

The New Zealand government is working with third-party developers to complete the integration of other apps by the end of December to enable the exchange of digital contact-tracing information from different apps and technologies.

The Privacy Commissioner has already endorsed the Bluetooth upgrade of the official NZ COVID Tracer app because of its focus on users' privacy. And the Ministry of Health aims to release the source code for the app so New Zealanders can see how their personal data is managed.

Throughout the summer, the government and ministry should emphasise the importance of using the contact-tracing app and assure New Zealanders about the security and privacy of their personal data.

Adoption of contact-tracing apps is no silver bullet in the battle against COVID-19, but it is a crucial element in New Zealand's collective public health response to the global pandemic.

Farkhondeh Hassandoust, Lecturer, Auckland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A computer can guess more than 100,000,000,000 passwords per second. Still think yours is secure?




Paul Haskell-Dowland, Edith Cowan University and Brianna O’Shea, Edith Cowan University

Passwords have been used for thousands of years as a means of identifying ourselves to others and in more recent times, to computers. It’s a simple concept – a shared piece of information, kept secret between individuals and used to “prove” identity.

Passwords in an IT context emerged in the 1960s with mainframe computers – large centrally operated computers with remote “terminals” for user access. They’re now used for everything from the PIN we enter at an ATM, to logging in to our computers and various websites.

But why do we need to “prove” our identity to the systems we access? And why are passwords so hard to get right?




Read more:
The long history, and short future, of the password


What makes a good password?

Until relatively recently, a good password might have been a word or phrase of as little as six to eight characters. But we now have minimum length guidelines. This is because of “entropy”.

When talking about passwords, entropy is the measure of predictability. The maths behind this isn’t complex, but let’s examine it with an even simpler measure: the number of possible passwords, sometimes referred to as the “password space”.

If a one-character password only contains one lowercase letter, there are only 26 possible passwords (“a” to “z”). By including uppercase letters, we increase our password space to 52 potential passwords.

The password space continues to expand as the length is increased and other character types are added.

Making a password longer or more complex greatly increases the potential ‘password space’. More password space means a more secure password.
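As a rough illustration of these figures, the short sketch below computes the password space for a few alphabet sizes and lengths, and the worst-case time to exhaust each at the 100 billion guesses-per-second rate quoted later in this article (the alphabet sizes are the usual assumptions, with roughly 95 printable ASCII characters):

```python
# Password space = (alphabet size) ** (password length).
# The 100 billion guesses/second rate is the figure quoted in this article.
GUESSES_PER_SECOND = 100_000_000_000

alphabets = {
    "lowercase (26)": 26,
    "upper + lower (52)": 52,
    "letters + digits (62)": 62,
    "letters + digits + symbols (95)": 95,
}

for label, size in alphabets.items():
    for length in (8, 12, 16):
        space = size ** length
        days = space / GUESSES_PER_SECOND / 86_400
        print(f"{label}, length {length}: {space:.2e} passwords, "
              f"~{days:,.1f} days to try them all")
```

Even this toy calculation shows why length matters more than any single character class: each extra character multiplies the search space by the whole alphabet size.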

Looking at the above figures, it’s easy to understand why we’re encouraged to use long passwords with upper and lowercase letters, numbers and symbols. The more complex the password, the more attempts needed to guess it.

However, the problem with depending on password complexity is that computers are highly efficient at repeating tasks – including guessing passwords.

Last year, a record was set for a computer trying to generate every conceivable password. It achieved a rate faster than 100,000,000,000 guesses per second.

By leveraging this computing power, cyber criminals can hack into systems by bombarding them with as many password combinations as possible, in a process called brute force attacks.

And with cloud-based technology, guessing an eight-character password can be achieved in as little as 12 minutes and cost as little as US$25.

Also, because passwords are almost always used to give access to sensitive data or important systems, this motivates cyber criminals to actively seek them out. It also drives a lucrative online market selling passwords, some of which come with email addresses and/or usernames.

You can purchase almost 600 million passwords online for just AU$14!

How are passwords stored on websites?

Website passwords are usually stored in a protected manner using a mathematical algorithm called hashing. A hashed password is unrecognisable and can’t be turned back into the password (an irreversible process).

When you try to log in, the password you enter is hashed using the same process and compared to the version stored on the site. This process is repeated each time you log in.

For example, the password “Pa$$w0rd” is given the value “02726d40f378e716981c4321d60ba3a325ed6a4c” when calculated using the SHA1 hashing algorithm. Try it yourself.
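You can reproduce that digest locally with Python's standard library; hashing the same input always yields the same output, which is exactly what lets a site compare your login attempt against its stored value:

```python
# Reproduce the article's SHA1 example with the standard library.
import hashlib

password = "Pa$$w0rd"
digest = hashlib.sha1(password.encode("utf-8")).hexdigest()
print(digest)  # 02726d40f378e716981c4321d60ba3a325ed6a4c
# Note: real systems should use salted, deliberately slow hashes
# (e.g. bcrypt); plain SHA1 is shown only to match the example above.
```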

When faced with a file full of hashed passwords, a brute force attack can be used, trying every combination of characters for a range of password lengths. This has become such common practice that there are websites that list common passwords alongside their (calculated) hashed value. You can simply search for the hash to reveal the corresponding password.

This screenshot of a Google search result for the SHA hashed password value ‘02726d40f378e716981c4321d60ba3a325ed6a4c’ reveals the original password: ‘Pa$$w0rd’.

The theft and selling of password lists is now so common, a dedicated website — haveibeenpwned.com — is available to help users check if their accounts are “in the wild”. This has grown to include more than 10 billion account details.

If your email address is listed on this site you should definitely change the detected password, as well as on any other sites for which you use the same credentials.
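Its companion service, Pwned Passwords, lets you check a password itself without transmitting it: you send only the first five hex characters of the password's SHA-1 hash and compare the returned suffixes locally (a k-anonymity scheme). A minimal sketch, assuming the third-party requests library:

```python
# Check a password against the Pwned Passwords "range" API without
# sending the password, or even its full hash, over the network.
# Assumes the third-party "requests" library (pip install requests).
import hashlib
import requests

def times_pwned(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():      # lines look like "SUFFIX:COUNT"
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(times_pwned("Pa$$w0rd"))  # non-zero: this password is in breach lists
```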




Read more:
Will the hack of 500 million Yahoo accounts get everyone to protect their passwords?


Is more complexity the solution?

You would think with so many password breaches occurring daily, we would have improved our password selection practices. Unfortunately, last year’s annual SplashData password survey has shown little change over five years.

The 2019 annual SplashData password survey revealed the most common passwords from 2015 to 2019.

As computing capabilities increase, the solution would appear to be increased complexity. But as humans, we are not skilled at (nor motivated to) remember highly complex passwords.

We’ve also passed the point where we use only two or three systems needing a password. It’s now common to access numerous sites, with each requiring a password (often of varying length and complexity). A recent survey suggests there are, on average, 70-80 passwords per person.

The good news is there are tools to address these issues. Most computers now support password storage in either the operating system or the web browser, usually with the option to share stored information across multiple devices.

Examples include Apple’s iCloud Keychain and the ability to save passwords in Internet Explorer, Chrome and Firefox (although less reliable).

Password managers such as KeePassXC can help users generate long, complex passwords and store them in a secure location for when they’re needed.

While this location still needs to be protected (usually with a long “master password”), using a password manager lets you have a unique, complex password for every website you visit.
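Generating such a password doesn't strictly require a manager, either. Here is a minimal sketch using Python's standard secrets module (the 16-character length and full printable alphabet are illustrative choices, not a recommendation from this article):

```python
# Generate a unique, complex password from a cryptographically
# secure random source. Length and alphabet are illustrative.
import secrets
import string

def make_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(make_password())  # different every run
```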

This won’t prevent a password from being stolen from a vulnerable website. But if it is stolen, you won’t have to worry about changing the same password on all your other sites.

There are of course vulnerabilities in these solutions too, but perhaps that’s a story for another day.




Read more:
Facebook hack reveals the perils of using a single account to log in to other services




Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University and Brianna O’Shea, Lecturer, Ethical Hacking and Defense, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Can I still be hacked with 2FA enabled?




David Tuffley, Griffith University

Cybersecurity is like a game of whack-a-mole. As soon as the good guys put a stop to one type of attack, another pops up.

Usernames and passwords were once good enough to keep an account secure. But before long, cybercriminals figured out how to get around this.

Often they’ll use “brute force attacks”, bombarding a user’s account with various password and login combinations in a bid to guess the correct one.

To deal with such attacks, a second layer of security was added in an approach known as two-factor authentication, or 2FA. It’s widespread now, but does 2FA also leave room for loopholes cybercriminals can exploit?

2FA via text message

There are various types of 2FA. The most common method is to be sent a single-use code as an SMS message to your phone, which you then enter following a prompt from the website or service you’re trying to access.

Most of us are familiar with this method as it’s favoured by major social media platforms. However, while it may seem safe enough, it isn’t necessarily.

Hackers have been known to trick mobile phone carriers (such as Telstra or Optus) into transferring a victim’s phone number to their own phone.




Read more:
$2.5 billion lost over a decade: ‘Nigerian princes’ lose their sheen, but scams are on the rise


Pretending to be the intended victim, the hacker contacts the carrier with a story about losing their phone, requesting a new SIM with the victim's number to be sent to them. Any authentication code sent to that number then goes directly to the hacker, granting them access to the victim's accounts.

This method is called SIM swapping. It's probably the easiest of several types of scams that can circumvent 2FA.

And while carriers’ verification processes for SIM requests are improving, a competent trickster can talk their way around them.

Authenticator apps

The authenticator method is more secure than 2FA via text message. It works on a principle known as TOTP, or “time-based one-time password”.

TOTP is more secure than SMS because a code is generated on your device rather than being sent across the network, where it might be intercepted.
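To make the principle concrete, here is the TOTP calculation (RFC 6238) written out with only Python's standard library. Both the authenticator app and the server run this same computation over a shared secret and the current 30-second window, so no code ever crosses the network; the secret below is a made-up example:

```python
# The TOTP calculation (RFC 6238) using only the standard library.
# The base32 secret below is a made-up example, not a real credential.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // step       # current 30-second window
    msg = struct.pack(">Q", counter)         # 8-byte big-endian counter
    mac = hmac.new(key, msg, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The phone app and the server each run this calculation on the shared
# secret and compare results, so the code never travels the network.
print(totp("JBSWY3DPEHPK3PXP"))
```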

The authenticator method uses apps such as Google Authenticator, LastPass, 1Password, Microsoft Authenticator, Authy and Yubico.

However, while it’s safer than 2FA via SMS, there have been reports of hackers stealing authentication codes from Android smartphones. They do this by tricking the user into installing malware (software designed to cause harm) that copies and sends the codes to the hacker.

The Android operating system is easier to hack than Apple's iOS. iOS is proprietary, while Android is open-source, which makes it easier to install malware on.

2FA using details unique to you

Biometric methods are another form of 2FA. These include fingerprint login, face recognition, retinal or iris scans, and voice recognition. Biometric identification is becoming popular for its ease of use.

Most smartphones today can be unlocked by placing a finger on the scanner or letting the camera scan your face – much quicker than entering a password or passcode.

However, biometric data can be hacked, too, either from the servers where they are stored or from the software that processes the data.

One case in point is last year's BioStar 2 data breach, in which nearly 28 million biometric records were hacked. BioStar 2 is a security system that uses facial recognition and fingerprinting technology to help organisations secure access to buildings.

There can also be false negatives and false positives in biometric recognition. Dirt on the fingerprint reader or on the person’s finger can lead to false negatives. Also, faces can sometimes be similar enough to fool facial recognition systems.

Another type of 2FA comes in the form of personal security questions such as “what city did your parents meet in?” or “what was your first pet’s name?”




Read more:
Don’t be phish food! Tips to avoid sharing your personal information online


Only the most determined and resourceful hacker will be able to find answers to these questions. It’s unlikely, but still possible, especially as more of us adopt public online profiles.

Person looks at a social media post from a woman, on their mobile.
Often when we share our lives on the internet, we fail to consider what kinds of people may be watching.
Shutterstock

2FA remains best practice

Despite all of the above, the biggest vulnerability to being hacked is still the human factor. Successful hackers have a bewildering array of psychological tricks in their arsenal.

A cyber attack could come as a polite request, a scary warning, a message ostensibly from a friend or colleague, or an intriguing “clickbait” link in an email.

The best way to protect yourself from hackers is to develop a healthy amount of scepticism. If you carefully check websites and links before clicking through and also use 2FA, the chances of being hacked become vanishingly small.

The bottom line is that 2FA is effective at keeping your accounts safe. However, try to avoid the less secure SMS method when given the option.

Just as burglars in the real world focus on houses with poor security, hackers on the internet look for weaknesses.

And while any security measure can be overcome with enough effort, a hacker won't make that investment unless they stand to gain something of greater value.

David Tuffley, Senior Lecturer in Applied Ethics & CyberSecurity, Griffith University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Morrison government to invest $211 million in fuel security to protect against risk and price pressures


Michelle Grattan, University of Canberra

The Morrison government is acting to protect Australia's fuel security as the international outlook becomes more uncertain and prices come under increasing pressure.

Under the plan, operating through market and regulatory measures, the government will invest $211 million in new domestic diesel storage facilities, changes to create a minimum onshore stockholding, and support for local refineries.



Announcing the program with Energy Minister Angus Taylor, Scott Morrison said the changes “will ensure Australian families and businesses can access the fuel they need, when they need it, for the lowest possible price”.

Australia’s fuel supplies are always potentially vulnerable to international instability, something that the pandemic – with its disruption to supply chains – has just reinforced. Local refineries are also under economic pressures, with potential consequences for prices.

The measures are:

  • a $200 million investment in a competitive grants program to build an extra 780 megalitres of onshore diesel storage with industry

  • creation of a minimum stockholding obligation for key transport fuels, and

  • working with refiners on a market design process for a refining production payment.

The government is seeking to have the $200 million grants for new storage matched by state governments or industry. Its focus will be on projects in strategic regional locations, connected to refineries and with connections to existing fuel infrastructure.

Morrison said fuel security was essential for Australia’s national security and the country was fortunate there hadn’t been a significant supply shock in more than 40 years. Fuel security underpinned the entire economy, and the industry itself supported thousands of workers, he said. “This plan is also about helping keep them in work.”

Taylor acknowledged the pressure refineries are under.

The government says modelling indicates a domestic refining capability is worth some $4.9 billion over a decade to Australian consumers in terms of price suppression.

The construction of diesel storage will support up to 950 jobs, with 75 new ongoing jobs, many in the regions, the government says.

“A minimum stockholding obligation will act as a safety net for petrol and jet fuel stocks and increased diesel stockholdings by 40%,” Morrison and Taylor said in their statement.

They stressed the government’s commitment to onshore refining capacity. The industry’s viability is under threat.

The planned production payment scheme is designed to protect against an estimated 1 cent per litre rise that, according to modelling, would hit fuel prices if all onshore refineries were to close. Refineries receiving the support will have to commit to keep operating locally.

Under the minimum stockholding requirements, petrol and jet fuel stocks would be kept no lower than current commercial levels, which are about 24 consumption days.

Diesel stocks would increase by 40%, to be at 28 consumption cover days. This would add about 10 days to Australia’s International Energy Agency compliance total.

In July Australia had 84 IEA days including stocks on water. Implementing a minimum stock holding obligation would bring Australia into line with most IEA members which regulate their fuel industries to meet their security needs. Under the IEA treaty member countries are required to have 90 days of stocks.

(IEA days and consumption cover days are different.)

Refineries will be exempt from the obligations to hold additional stocks.

The production payments will ensure a minimum value of 1.15 cents per litre to refineries. A competitive process will determine the location of new storage facilities.

The government says it recognises “the future refining sector in Australia will not look like the past. However, this framework will ensure the market is viable for both our future needs and can support Australia during a severe fuel disruption.”

Michelle Grattan, Professorial Fellow, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Face masks and facial recognition will both be common in the future. How will they co-exist?




Paul Haskell-Dowland, Edith Cowan University

It’s surprising how quickly public opinion can change. Winding the clocks back 12 months, many of us would have looked at a masked individual in public with suspicion.

Now, some countries have enshrined face mask use in law. They’ve also been made compulsory in Victoria and are recommended in several other states.

One consequence of this is that facial recognition systems in place for security and crime prevention may no longer be able to fulfil their purpose. In Australia, most agencies are silent about the use of facial recognition.

But documents leaked earlier this year revealed Australian Federal Police and state police in Queensland, Victoria and South Australia all use Clearview AI, a commercial facial recognition platform. New South Wales police also admitted using a biometrics tool called PhotoTrac.




Read more:
Your face is part of Australia’s ‘national security weapon’: should you be concerned?


What is facial recognition?

Facial recognition involves using computing to identify human faces in images or videos, and then measuring specific facial characteristics. This can include the distance between eyes, and the relative positions of the nose, chin and mouth.

This information is combined to create a facial signature, or profile. When used for individual recognition – such as to unlock your phone – an image from the camera is compared to a recorded profile. This process of facial “verification” is relatively simple.

However, when facial recognition is used to identify faces in a crowd, it requires a significant database of profiles against which to compare the main image.

These profiles can be legally collected by enrolling large numbers of users into systems. But they’re sometimes collected through covert means.

Facial ‘verification’ (the method used to unlock smartphones) compares the main image with a single pre-saved facial signature. Facial ‘identification’ requires examining the image against an entire database of facial signatures.
teguhjatipras/pixabay
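In code, the difference between the two modes is simply one comparison versus a database search. A minimal sketch with NumPy, assuming an upstream model has already converted each face into a numeric signature vector (the 128-dimension size, random vectors and 0.6 threshold are illustrative assumptions):

```python
# Verification (1:1) vs identification (1:N) over face signatures.
# Assumes an upstream model has already turned each face image into a
# vector; the 128 dimensions and 0.6 threshold are illustrative.
import numpy as np

THRESHOLD = 0.6

def verify(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """1:1 check, e.g. unlocking your own phone."""
    return float(np.linalg.norm(probe - enrolled)) < THRESHOLD

def identify(probe: np.ndarray, database: dict):
    """1:N search, e.g. matching a face against a watchlist."""
    name = min(database, key=lambda n: np.linalg.norm(probe - database[n]))
    dist = float(np.linalg.norm(probe - database[name]))
    return (name, dist) if dist < THRESHOLD else (None, dist)

rng = np.random.default_rng(0)
db = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = db["person_42"] + rng.normal(scale=0.01, size=128)  # noisy re-capture
print(identify(probe, db))  # ('person_42', small distance)
```

The 1:N search also explains why crowd surveillance is harder than phone unlocking: every degraded signature must be distinguished from thousands of near neighbours, not just one.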

The problem with face masks

As facial signatures are based on mathematical models of the relative positions of facial features, anything that reduces the visibility of key characteristics (such as the nose, mouth and chin) interferes with facial recognition.

There are already many ways to evade or interfere with facial recognition technologies. Some of these evolved from techniques designed to evade number plate recognition systems.

Although the coronavirus pandemic has escalated concerns around the evasion of facial recognition systems, leaked US documents show these discussions taking place back in 2018 and 2019, too.

This clip shows how fashion designers are outsmarting facial recognition surveillance / YouTube.

And while the debate on the use and legality of facial recognition continues, the focus has recently shifted to the challenges presented by mask-wearing in public.

On this front, the US National Institute of Standards and Technology (NIST) coordinated a major research project to evaluate how masks impacted the performance of various facial recognition systems used across the globe.

Its report, published in July, found some algorithms struggled to correctly identify mask-wearing individuals up to 50% of the time. This was a significant error rate compared to when the same algorithms analysed unmasked faces.

Some algorithms even struggled to locate a face when a mask was covering too much of it.

Finding ways around the problem

There are currently no usable photo data sets of mask-wearing people that can be used to train and evaluate facial recognition systems.

The NIST study addressed this problem by superimposing masks (of various colours, sizes and positions) over images of faces.

While this may not be a realistic portrayal of a person wearing a mask, it’s effective enough to study the effects of mask-wearing on facial recognition systems.
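For a sense of how such superimposition can be approximated, here is a crude sketch using the Pillow imaging library (an assumption — NIST's tooling placed masks using facial landmarks, not fixed image fractions, and the file names are placeholders):

```python
# Crude simulation of mask-wearing for testing recognition systems:
# paint a mask-shaped polygon over the lower face. Assumes Pillow
# (pip install Pillow); file names are placeholders, and real studies
# position masks using facial landmarks rather than fixed fractions.
from PIL import Image, ImageDraw

def add_synthetic_mask(path_in: str, path_out: str,
                       colour: str = "lightblue") -> None:
    img = Image.open(path_in).convert("RGB")
    w, h = img.size
    draw = ImageDraw.Draw(img)
    # A rough polygon covering the nose, mouth and chin region.
    draw.polygon([(w * 0.15, h * 0.55), (w * 0.85, h * 0.55),
                  (w * 0.75, h * 0.95), (w * 0.25, h * 0.95)],
                 fill=colour)
    img.save(path_out)

add_synthetic_mask("face.jpg", "face_masked.jpg")
```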

It’s possible images of real masked people would allow more details to be extracted to improve recognition systems – perhaps by estimating the nose’s position based on visible protrusions in the mask.

Many facial recognition technology vendors are already preparing for a future where mask use will continue, or even increase. One US company offers masks with customers' faces printed on them, so a customer can unlock their smartphone without having to remove the mask.

Growing incentives for wearing masks

Even before the coronavirus pandemic, masks were a common defence against air pollution and viral infection in countries including China and Japan.




Read more:
I’ve always wondered: why many people in Asian countries wear masks, and whether they work


Political activists also wear masks to evade detection on the streets. Both the Hong Kong and Black Lives Matter protests have reinforced protesters’ desire to dodge facial recognition by authorities and government agencies.

As experts forecast a future with more pandemics, rising levels of air pollution, persisting authoritarian regimes and a projected increase in bushfires producing dangerous smoke – it’s likely mask-wearing will become the norm for at least a proportion of us.

Facial recognition systems will need to adapt. Detection will be based on features that remain visible such as the eyes, eyebrows, hairline and general shape of the face.

Such technologies are already under development. Several suppliers are offering upgrades and solutions that claim to deliver reliable results with mask-wearing subjects.

For those who oppose the use of facial recognition and wish to go undetected, a plain mask may suffice for now. But in the future they might have to consider alternatives, such as a mask printed with a fake computer-generated face.

Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

TikTok can be good for your kids if you follow a few tips to stay safe



Joanne Orlando, Western Sydney University

The video-sharing app TikTok is a hot political potato amid concerns over who has access to users’ personal data.

The United States has moved to ban the app. Other countries, including Australia, have expressed concern.

But does this mean your children who use this app are at risk? If you’re a parent, let me explain the issues and give you a few tips to make sure your kids stay safe.

A record-breaker

Never has an app for young people been so popular. By April this year the TikTok app had been downloaded more than 2 billion times worldwide.

The app recently broke all records for the most downloaded app in a quarterly period, with 315 million downloads globally in the first three months of 2020.

Its popularity with young Aussies has sky-rocketed. Around 1.6 million Australians use the app, including about one in five people born since 2006. That’s an estimated 537,000 young Australians.

Like all social media apps, TikTok siphons data about its users such as email address, contacts, IP address and geolocation information.

TikTok was fined US$5.8 million (A$8 million) to settle US government claims it illegally collected personal information from children.

As TikTok is owned by a Chinese company, ByteDance, US President Donald Trump and others are also worried about the app handing over this data to the Chinese state. TikTok denies it does this.




Read more:
China could be using TikTok to spy on Australians, but banning it isn’t a simple fix


Just days ago the Trump administration signed an executive order to seek a ban on TikTok operating or interacting with US companies.

Youngsters still TikToking

There is no hint of this stopping our TikToking children. For them it’s business as usual, creating and uploading videos of themselves lip-syncing, singing, dancing or just talking.

The most recent trend on TikTok – Taylor Swift Love Story dance – has resulted in more than 1.5 million video uploads in around two weeks alone.

But the latest political issues with TikTok raise questions about whether children should be on this platform right now. More broadly, as we see copycat products such as Instagram Reels launched, should children be using any social media platforms that focus on sharing videos of themselves at all?

The pros and cons

The TikTok app has filled a genuine social need for this young age group. Social media sites can offer a sense of belonging to a group, such as a group focused on a particular interest, experience, social group or religion.

TikTok celebrates diversity and inclusivity. It can provide a place where young people can join together to support each other in their needs.

During the COVID-19 pandemic, TikTok has had huge numbers of videos with coronavirus-related hashtags such as #quarantine (65 billion views), #happyathome (19.5 billion views) and #safehands (5.4 billion views).

Some of these videos are funny, some include song and dance. The World Health Organisation even posted its own youth-oriented videos on TikTok to provide young people with reliable public health advice about COVID-19.

The key benefit is that the platform became a place where young people from all corners of the planet could join together, to understand the pandemic and take its stressful edge off for themselves and others their age. Where else could they do that? The mental health benefits this offers can be important.

Let’s get creative

Another benefit lies in the creativity TikTok centres on. Passive use of technology, such as scrolling and checking social media with no purpose, can lead to addictive types of screen behaviours for young people.

Planning and creating content, by contrast, such as making their own videos, is meaningful use of technology and curbs addictive technology behaviours. In other words, if young people are going to use technology, using it creatively, purposefully and with meaning is the type of use we want to encourage.

Users of TikTok must be at least 13 years old, although it does have a limited app for under 13s.

Know the risks

As with all social media platforms, children are engaging in a space in which others can contact them. They may also encounter adult concepts they are not yet mature enough for, such as love gone wrong, or suggestively twerking to songs.




Read more:
The secret of TikTok’s success? Humans are wired to love imitating dance moves


The platform moves very quickly, with a huge amount of videos, likes and comments uploaded every day. Taking it all in can lead to cognitive overload. This can be distracting for children and decrease focus on other aspects of their life including schoolwork.

Three young girls video themselves on a smartphone.
How to stay safe and still have fun with TikTok.
Luiza Kamalova/Shutterstock

So here are a few tips for keeping your child safe, as well as getting the most out of the creative/educational aspects of TikTok.

  1. as with any social network, use privacy settings to limit how much information your child is sharing

  2. if your child is creating a video, make sure it is reviewed before it’s uploaded to ensure it doesn’t include content that can be misconstrued or have negative implications

  3. if a child younger than 13 wants to use the app, there’s a section for this younger age group that includes extra safety and privacy features

  4. if you’re okay with your child creating videos for TikTok, then doing it together or helping them plan and film the video can be a great parent-child bonding activity

  5. be aware of the collection of data by TikTok, encourage your child to be aware of it, and help them know what they are giving away and the implications for them.

Happy (safe) TikToking!

Joanne Orlando, Researcher: Children and Technology, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.