83% of Australians want tougher privacy laws. Now’s your chance to tell the government what you want




Normann Witzleb, Monash University

Federal Attorney-General Christian Porter has called for submissions to the long-awaited review of the federal Privacy Act 1988.

This is the first wide-ranging review of privacy laws since the Australian Law Reform Commission produced a landmark report in 2008.

Australia has in the past often hesitated to adopt a strong privacy framework. The new review, however, provides an opportunity to improve data protection rules to an internationally competitive standard.

Here are some of the ideas proposed — and what’s at stake if we get this wrong.




Read more:
It’s time for privacy invasion to be a legal wrong


Australians care deeply about data privacy

Personal information has never had a more central role in our society and economy, and the government has a strong mandate to update Australia’s framework for the protection of personal information.

In the Australian Privacy Commissioner’s 2020 survey, 83% of Australians said they’d like the government to do more to protect the privacy of their data.

The intense debate about the COVIDSafe app earlier this year also shows Australians care deeply about their private information, even in a time of crisis.

Privacy laws and enforcement can hardly keep up with the ever-increasing digitalisation of our lives. Data-driven innovation provides valuable services that many of us use and enjoy. However, the government’s issues paper notes:

As Australians spend more of their time online, and new technologies emerge, such as artificial intelligence, more personal information about individuals is being captured and processed, raising questions as to whether Australian privacy law is fit for purpose.

The pandemic has accelerated the existing trend towards digitalisation and created a range of new privacy issues, from those that arise when working or studying at home to the use of personal data in contact tracing.

Australians are rightly concerned they are losing control over their personal data.

So there’s no question the government’s review is sorely needed.

Issues of concern for the new privacy review

The government’s review follows the Australian Competition and Consumer Commission’s (ACCC) Digital Platforms Inquiry, which found that some data practices of digital platforms are unfair and undermine consumer trust. We rely heavily on digital platforms such as Google and Facebook for information, entertainment and engagement with the world around us.

Our interactions with these platforms leave countless digital traces that allow us to be profiled and tracked for profit. The ACCC found that the digital platforms make it hard for consumers to resist these practices and to make free and informed decisions regarding the collection, use and disclosure of their personal data.

The government has committed to implement most of the ACCC’s recommendations for stronger privacy laws to give us greater consumer control.

However, the reforms must go further. The review also provides an opportunity to address some long-standing weaknesses of Australia’s privacy regime.

The government’s issues paper, released to inform the review, identified several areas of particular concern. These include:

  • the scope of application of the Privacy Act, in particular the definition of “personal information” and current private sector exemptions

  • whether the Privacy Act provides an effective framework for promoting good privacy practices

  • whether individuals should have a direct right to sue for a breach of privacy obligations under the Privacy Act

  • whether a statutory tort for serious invasions of privacy should be introduced into Australian law, allowing Australians to go to court if their privacy is invaded

  • whether the enforcement powers of the Privacy Commissioner should be strengthened.

While most recent attention has focused on improving consumers’ choice and control over their personal data, the review also brings back onto the agenda some never-implemented recommendations from the Australian Law Reform Commission’s 2008 review.

These include introducing a statutory tort for serious invasions of privacy, and extending the coverage of the Privacy Act.

Exemptions for small business and political parties should be reviewed

The Privacy Act currently contains several exemptions that limit its scope. The two most contentious exemptions have the effect that political parties and most business organisations need not comply with the general data protection standards under the Act.

The small business exemption is intended to reduce red tape for small operators. However, largely unknown to the Australian public, it means the vast majority of Australian businesses are not legally obliged to comply with standards for fair and safe handling of personal information.

Procedures for compulsory venue check-ins under COVID health regulations are just one recent illustration of why this is a problem. Some people have raised concerns that customers’ contact-tracing data, particularly data collected via QR codes, may be exploited by marketing companies for targeted advertising.


Under current privacy laws, cafe and restaurant operators are generally exempt from complying with privacy obligations to undertake due diligence checks on third-party providers used to collect customers’ data.

The political exemption is another area in need of reform. As the Facebook/Cambridge Analytica scandal showed, political campaigning is becoming increasingly tech-driven.

However, Australian political parties are exempt from complying with the Privacy Act and anti-spam legislation. This means voters cannot effectively protect themselves against data harvesting for political purposes and micro-targeting in election campaigns through unsolicited text messages.

There is a good case for arguing political parties and candidates should be subject to the same rules as other organisations. It’s what most Australians would like and, in fact, wrongly believe is already in place.




Read more:
How political parties legally harvest your data and use it to bombard you with election spam


Trust drives innovation

Trust in digital technologies is undermined when data practices come across as opaque, creepy or unsafe.

There is increasing recognition that data protection drives innovation and adoption of modern applications, rather than impedes it.


The COVIDSafe app is a good example.
When that app was debated, the government accepted that robust privacy protections were necessary to achieve a strong uptake by the community.

We would all benefit if the government saw that this same principle applies to other areas of society where our precious data is collected.


Information on how to make a submission to the federal government review of the Privacy Act 1988 can be found here.




Read more:
People want data privacy but don’t always know what they’re getting


The Conversation


Normann Witzleb, Associate Professor in Law, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How Australia can reap the benefits and dodge the dangers of the Internet of Things




Kayleen Manwaring, UNSW and Peter Leonard, UNSW

The Internet of Things (IoT) is already all around us. Online devices have become essential in industries from manufacturing and healthcare to agriculture and environmental management, not to mention our own homes. Digital consulting firm Ovum estimates that by 2022 Australian homes will host more than 47 million IoT devices, and the value of the global market will exceed US$1 trillion.

The IoT presents great opportunities, but it brings many risks too. Problems include excessive surveillance, loss of privacy, transparency and control, and reliance on unsafe or unsuitable services or devices.




Read more:
Explainer: the Internet of Things


In some places, such as the European Union, Germany, South Korea and the United Kingdom, governments have been quick to develop policies and some limited regulation to take advantage of the technology and mitigate its harmful impacts.

Australia has been late to react. Even recent moves by the federal government to make IoT devices more secure have been far behind international developments.

A report launched today by the Australian Council of Learned Academies (ACOLA) may help get Australia up to speed. It supplies a wide-ranging, peer-reviewed base of evidence about opportunities, benefits and challenges the IoT presents Australia over the next decade.

Benefits of the Internet of Things

The report examines how we can improve our lives with IoT-related technologies. It explores a range of applications across Australian cities and rural, regional and remote areas.

Some IoT services are already available, such as the Smart Cities and Suburbs program run by local and federal governments. This program funds projects in areas such as traffic congestion, waste management and urban safety.

Health applications are also on the rise. The University of New England has piloted the remote monitoring of COVID-19 patients with mild symptoms using IoT-enabled pulse oximeters.

Augmented and virtual reality applications are also becoming more common. IoT devices can track carbon emissions in supply chains and energy use in homes. IoT services can also help governments make public transport infrastructure more efficient.

The benefits of the IoT won’t only be felt in cities. There may be even more to be gained in rural, regional and remote areas. IoT can aid agriculture in many ways, as well as working to prevent and manage bushfires and other environmental disasters. Sophisticated remote learning and health care will also benefit people outside urban areas.

While some benefits of the IoT will be felt everywhere, some will have more impact in cities and others in rural, remote and regional areas.
ACOLA, CC BY-NC

Opportunities for the Australian economy

The IoT presents critical opportunities for economic growth. In 2016-17, IoT activity was already worth A$74.3 billion to the Australian economy.

The IoT can facilitate more data-informed processes and automation (also known as Industry 4.0). This has immediate potential for substantial benefits.

One opportunity for Australia is niche manufacturing. Making bespoke products would be more efficient with IoT capability, which would let Australian businesses reach a consumer market with wide product ranges but low domestic volumes due to our small population.

Agricultural innovation enabled by the IoT, using Australia’s existing capabilities and expertise, is another promising area for investment.




Read more:
Six things every consumer should know about the ‘Internet of Things’


Risks of the Internet of Things

IoT devices can collect huge amounts of sensitive data, and controlling that data and keeping it secure presents significant risks. However, the Australian community is not well informed about these issues and some IoT providers are slow to explain appropriate and safe use of IoT devices and services.

These issues make it difficult for consumers to tell good practice from bad, and do not inspire trust in IoT. Lack of consistent international IoT standards can also make it difficult for different devices to work together, and creates a risk that users will be “locked in” to products from a single supplier.

In IoT systems it can also be very complex to determine who is responsible for any particular fault or issue, because of the many possible combinations of product, hardware, software and services. There will also be many contracts and user agreements, creating contractual complexity that adds to already difficult legal questions.




Read more:
Are your devices spying on you? Australia’s very small step to make the Internet of Things safer


The increased surveillance made possible by the IoT can lead to breaches of human rights. Partially or fully automated decision-making can also lead to discrimination and other socially unacceptable outcomes.

And while the IoT can assist environmental sustainability, it can also increase environmental costs and impacts. The ACOLA report estimates that by 2050 the IoT could consume between 1 and 5% of the world’s electricity.
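To put the report’s 1-5% range in perspective, here’s a back-of-the-envelope calculation. The global generation figure of roughly 25,000 TWh per year is an illustrative assumption, not a number from the ACOLA report:

```python
# Rough scale of the ACOLA estimate. The 25,000 TWh/year world generation
# figure is an illustrative assumption, not taken from the report.
world_twh_per_year = 25_000
low_share, high_share = 1, 5  # ACOLA's 1-5% range for IoT by 2050

print(world_twh_per_year * low_share // 100)   # 250 TWh/year at the low end
print(world_twh_per_year * high_share // 100)  # 1250 TWh/year at the high end
```

Even the low end of that range is comparable to the annual electricity use of a mid-sized industrialised country.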

Other risks of harmful social consequences include an increased potential for domestic violence, the targeting of children by malicious actors and corporate interests, increased social withdrawal and the exacerbation of existing inequalities for vulnerable populations. The recent death of a woman in rural New South Wales who was being treated via telehealth provides just one example of these risks.

Maximising the benefits of the IoT

The ACOLA report makes several recommendations for Australia to take advantage of the IoT while minimising its downsides.

ACOLA advocates a national approach, focusing on areas of strength. It recommends continuing investment in smart cities and regions, and more collaboration between industry, government and education.

ACOLA also recommends increased community engagement, better ethical and regulatory frameworks for data and baseline security standards.

The ACOLA report is only a beginning. More specific work needs to be done to make the IoT work for Australia and its citizens.

The report does outline key areas for future research. These include the actual experiences of people in smart cities and homes, the value of data, environmental impacts and the use of connected and autonomous vehicles.

Kayleen Manwaring, Senior Lecturer, School of Taxation & Business Law, UNSW and Peter Leonard, Professor of Practice (IT Systems and Management and Business and Taxation Law), UNSW Business School, Sydney, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

3.2 billion images and 720,000 hours of video are shared online daily. Can you sort real from fake?




T.J. Thomson, Queensland University of Technology; Daniel Angus, Queensland University of Technology, and Paula Dootson, Queensland University of Technology

Twitter over the weekend “tagged” as manipulated a video showing US Democratic presidential candidate Joe Biden supposedly forgetting which state he’s in while addressing a crowd.

Biden’s “hello Minnesota” greeting contrasted with prominent signage reading “Tampa, Florida” and “Text FL to 30330”.

The Associated Press’s fact check confirmed the signs were added digitally and the original footage was indeed from a Minnesota rally. But by the time the misleading video was removed it already had more than one million views, The Guardian reports.

If you use social media, the chances are you see (and forward) some of the more than 3.2 billion images and 720,000 hours of video shared daily. When faced with such a glut of content, how can we know what’s real and what’s not?
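Those daily totals are easier to grasp as per-second rates. A quick sketch (simple division, using the daily figures quoted above):

```python
# Convert the quoted daily totals into per-second rates.
images_per_day = 3_200_000_000      # images shared daily
video_hours_per_day = 720_000       # hours of video shared daily
seconds_per_day = 24 * 60 * 60      # 86,400 seconds in a day

print(images_per_day // seconds_per_day)                # ~37,000 images every second
print(round(video_hours_per_day / seconds_per_day, 1))  # ~8.3 hours of video every second
```

In other words, more video is uploaded every second than anyone could watch in a working day.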

While one part of the solution is an increased use of content verification tools, it’s equally important we all boost our digital media literacy. Ultimately, one of the best lines of defence — and the only one you can control — is you.

Seeing shouldn’t always be believing

Misinformation (when you accidentally share false content) and disinformation (when you intentionally share it) in any medium can erode trust in civil institutions such as news organisations, coalitions and social movements. However, fake photos and videos are often the most potent.

For those with a vested political interest, creating, sharing and/or editing false images can distract, confuse and manipulate viewers to sow discord and uncertainty (especially in already polarised environments). Posters and platforms can also make money from the sharing of fake, sensationalist content.

Only 11-25% of journalists globally use social media content verification tools, according to the International Centre for Journalists.




Read more:
Facebook is tilting the political playing field more than ever, and it’s no accident


Could you spot a doctored image?

Consider this photo of Martin Luther King Jr.

This altered image clones part of the background over King Jr’s finger, so it looks like he’s flipping off the camera. It has been shared as genuine on Twitter, Reddit and white supremacist websites.

In the original 1964 photo, King flashed the “V for victory” sign after learning the US Senate had passed the civil rights bill.

Beyond adding or removing elements, there’s a whole category of photo manipulation in which images are fused together.

Earlier this year, a photo of an armed man was photoshopped by Fox News, which overlaid the man onto other scenes without disclosing the edits, the Seattle Times reported.

Similarly, the image below was shared thousands of times on social media in January, during Australia’s Black Summer bushfires. The AFP’s fact check confirmed it is not authentic and is actually a combination of several separate photos.

Fully and partially synthetic content

Online, you’ll also find sophisticated “deepfake” videos showing (usually famous) people saying or doing things they never did. Less advanced versions can be created using apps such as Zao and Reface.

A team from the Massachusetts Institute of Technology created this fake video showing US President Richard Nixon reading lines from a speech crafted in case the 1969 moon landing failed. (YouTube)

Or, if you don’t want to use your photo for a profile picture, you can default to one of several websites offering hundreds of thousands of AI-generated, photorealistic images of people.

AI-generated faces.
These people don’t exist; they’re just images generated by artificial intelligence.
Generated Photos, CC BY

Editing pixel values and the (not so) simple crop

Cropping can greatly alter the context of a photo, too.

We saw this in 2017, when a US government employee edited official pictures of Donald Trump’s inauguration to make the crowd appear bigger, according to The Guardian. The staffer cropped out the empty space “where the crowd ended” in a set of pictures produced for Trump.

Views of the crowds at the inaugurations of former US President Barack Obama in 2009 (left) and President Donald Trump in 2017 (right).
AP

But what about edits that only alter pixel values such as colour, saturation or contrast?

One historical example illustrates the consequences of this. In 1994, Time magazine’s cover considerably “darkened” OJ Simpson’s police mugshot. This added fuel to a case already plagued by racial tension, to which the magazine responded:

No racial implication was intended, by Time or by the artist.
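Darkening of that sort is a trivial pixel-value edit: every pixel is simply scaled towards black. A minimal sketch, where the 0.6 factor is arbitrary and chosen purely for illustration:

```python
def darken(pixels, factor=0.6):
    """Scale each 0-255 pixel value towards black, clamping to the valid range."""
    return [max(0, min(255, int(p * factor))) for p in pixels]

# Three sample grayscale values before and after darkening.
print(darken([200, 128, 40]))  # [120, 76, 24]
```

No pixels are added or removed, which is why edits like this are so hard to detect after the fact.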

Tools for debunking digital fakery

For those of us who don’t want to be duped by visual mis/disinformation, there are tools available — although each comes with its own limitations (something we discuss in our recent paper).

Invisible digital watermarking has been proposed as a solution. However, it isn’t widespread and requires buy-in from both content publishers and distributors.

Reverse image search (such as Google’s) is often free and can be helpful for identifying earlier, potentially more authentic copies of images online. That said, it’s not foolproof because it:

  • relies on unedited copies of the media already being online
  • doesn’t search the entire web
  • doesn’t always allow filtering by publication time. Some reverse image search services such as TinEye support this function, but Google’s doesn’t.
  • returns only exact matches or near-matches, so it’s not thorough. For instance, editing an image and then flipping its orientation can fool Google into thinking it’s an entirely different one.
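The last point, that exact matching is easily fooled, can be demonstrated with a toy sketch. This is not how any real reverse image search fingerprints images; it simply shows why a byte-for-byte (cryptographic) hash stops matching the moment an image is mirrored:

```python
import hashlib

# A tiny 2x3 "grayscale image": rows of 0-255 pixel values.
image = (
    (10, 20, 30),
    (40, 50, 60),
)

def fingerprint(img):
    """Hash the raw pixel bytes: an exact-match fingerprint."""
    return hashlib.sha256(bytes(p for row in img for p in row)).hexdigest()

# Mirror the image horizontally: the "same" picture to a human viewer.
flipped = tuple(tuple(reversed(row)) for row in image)

print(fingerprint(image) == fingerprint(flipped))  # False: the exact match fails
```

Real services use more robust perceptual fingerprints, but even those can be defeated by combinations of edits, which is why the flipped-image trick works.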



Read more:
Instead of showing leadership, Twitter pays lip service to the dangers of deep fakes


Most reliable tools are sophisticated

Meanwhile, manual forensic detection methods for visual mis/disinformation focus mostly on edits visible to the naked eye, or rely on examining features that aren’t included in every image (such as shadows). They’re also time-consuming, expensive and need specialised expertise.

Still, you can access work in this field by visiting sites such as Snopes.com — which has a growing repository of “fauxtography”.

Computer vision and machine learning also offer relatively advanced detection capabilities for images and videos. But they too require technical expertise to operate and understand.

Moreover, improving them involves using large volumes of “training data”, but the image repositories used for this usually don’t contain the real-world images seen in the news.

If you use an image verification tool such as the REVEAL project’s image verification assistant, you might need an expert to help interpret the results.

The good news, however, is that before turning to any of the above tools, there are some simple questions you can ask yourself to potentially figure out whether a photo or video on social media is fake. Think:

  • was it originally made for social media?
  • how widely and for how long was it circulated?
  • what responses did it receive?
  • who were the intended audiences?

Quite often, the logical conclusions drawn from the answers will be enough to weed out inauthentic visuals. You can access the full list of questions, put together by Manchester Metropolitan University experts, here.

T.J. Thomson, Senior Lecturer in Visual Communication & Media, Queensland University of Technology; Daniel Angus, Associate Professor in Digital Communication, Queensland University of Technology, and Paula Dootson, Senior Lecturer, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

NBN upgrades explained: how will they make internet speeds faster? And will the regions miss out?




Thas Ampalavanapillai Nirmalathas, University of Melbourne

The federal government has announced a A$3.5 billion upgrade to the National Broadband Network (NBN) that will grant two million households on-demand access to faster fibre-to-the-node (FTTN) internet by 2023.

Reports from the ABC suggest the plan would go as far as to upgrade the FTTN services to fibre-to-the-premises (FTTP) – although this wasn’t explicitly said in Minister for Communications Paul Fletcher’s announcement.

The minister said the upgrade would involve expanding current FTTN connections to run along more streets across the country, giving people the option to connect to broadband speeds of up to one gigabit per second. Improvements have also been promised for the hybrid fibre coaxial (HFC) and fibre-to-the-curb (FTTC) systems.

Altogether the upgrade is expected to give about six million households access to internet speeds of up to one gigabit per second. But how will the existing infrastructure be boosted? And who will miss out?

Getting ahead of the terminology

Let’s first understand the various terms used to describe aspects of the NBN network.

Fibre to the Premises (FTTP)

FTTP refers to households with an optical fibre connection running from a device on a wall of the house directly to the network. This provides reliable high-speed internet.

The “network” simply refers to the exchange point from which households’ broadband connections are passed to service providers, such as Telstra, who help them get connected.

In an FTTP network, fibre optic connectors in the back of distribution hub panels connect homes to broadband services.
Shutterstock

Fibre to the Node (FTTN)

The FTTN system serves about 4.7 million premises in Australia, out of a total 11.5 million covered under the NBN.

With FTTN, households are connected via a copper line to a “node” in their neighbourhood. This node is further connected to the network with fibre optic cables that transfer data much faster than copper cables can.

With FTTN systems, the quality of the broadband service depends on the length of the copper cable and the choice of technology used to support data transmission via this cable.

It’s technically possible to offer high internet speeds when copper cables are very short and the latest data transmission technologies are used.

In reality, however, Australia’s FTTN speeds using a fibre/copper mix have been slow. An FTTN connection’s reliability also depends on network conditions, such as the age of the copper cabling and whether any of the signal is leaking due to degradation.
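The copper-length effect can be sketched as a simple lookup. The speed tiers below are indicative VDSL2-style figures chosen for illustration, not NBN engineering or measurement data:

```python
def rough_fttn_speed(copper_metres):
    """Very rough indicative downstream rate (Mbps) for a VDSL2-style FTTN
    link, by copper loop length. Illustrative figures only."""
    tiers = [(300, 100), (500, 80), (700, 55), (900, 40), (1100, 25)]
    for max_metres, mbps in tiers:
        if copper_metres <= max_metres:
            return mbps
    return 12  # very long loops degrade badly

print(rough_fttn_speed(250), rough_fttn_speed(1200))  # short run vs long run
```

The general shape, not the exact numbers, is the point: shortening the copper run (as FTTC does) is the main lever for lifting speeds.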

Illustration of fibre optic cables.
Fibre optic cables use pulses of light for high-speed data transmission across long distances.
Shutterstock

Fibre to the Curb (FTTC)

The limitations of FTTN mentioned above can be sidestepped by extending fibre cables from the network right up to a curbside “distribution point unit” nearer to households. This unit then becomes the “node” of the network.

FTTC allows significantly faster data transmission. This is because it services relatively fewer households (allowing better signal transmission to each one) and reduces the length of copper cable relied upon.

Hybrid Fibre Coaxial (HFC)

In many areas, the NBN uses coaxial cables instead of copper cables. These were first installed by Optus and Telstra in the 1990s to deliver cable broadband and television. They’ve since been modernised for use in the NBN’s fibre network.

In theory, HFC systems should be able to offer internet speeds of more than 100 megabits per second. But many households have been unable to achieve this due to the poor condition of cabling infrastructure in some parts, as well as large numbers of households sharing a single coaxial cable.

Coaxial cables are the most limiting part of the HFC system. So expanding the length of fibre cabling (and shortening the coaxial cables being used) would allow faster internet speeds. The NBN’s 2020 corporate plan identifies doing this as a priority.

Minister Fletcher today said the planned upgrades would ensure all customers serviced by HFC would have access to speeds of up to one gigabit per second. Currently, only 7% of HFC customers do.

Mixing things up isn’t always a good idea

Under the original NBN plan, the Labor government in 2009 promised optical fibre connections for 93% of all Australian households.

Successive reviews led to the use of multiple technologies in the network, rather than the full-fibre network Labor envisioned. Many households are not able to upgrade their connection because of limitations to the technology available in their neighbourhood.




Read more:
The NBN: how a national infrastructure dream fell short


Also, many businesses currently served by FTTN can’t access internet speeds that meet their needs. To avoid internet speeds hindering their work, many businesses need a minimum speed between 100 megabits and 1 gigabit per second, depending on their scale.

Currently, no FTTN services and few HFC services can support such speeds.

Moreover, the Australian Competition and Consumer Commission’s NBN monitoring report published in May (during the pandemic) found that in about 95% of cases, NBN plans only delivered 83-91% of the maximum advertised speed.

The report also showed 10% of the monitored services were underperforming – and 95% of these were FTTN services. This makes a strong case for the need to upgrade FTTN.

Who will benefit?

While the NBN’s most recent corporate plan identifies work to be done across its various offerings (FTTN, FTTC, HFC, fixed wireless), it’s unclear exactly how much each system stands to gain from today’s announcements.

Ideally, urban and regional households that can’t access 100 megabits per second speeds would be prioritised for fibre expansion. The expanded FTTN network should also cover those struggling to access reliable broadband in regional Australia.

Bringing fibre cabling to households in remote areas would be difficult. One option, however, could be to extend fibre connections to an expanded network of base stations in regional Australia, thereby improving the capacity of the NBN’s fixed wireless service.

These base stations “beam” signals to nearby premises. Installing more stations would mean fewer premises covered by each (and therefore better connectivity for each).

Regardless, it’s important the upgrades happen quickly. Many NBN customers now working and studying from home will be waiting eagerly for a much-needed boost to their internet speed.




Read more:
How to boost your internet speed when everyone is working from home




Thas Ampalavanapillai Nirmalathas, Group Head – Electronic and Photonic Systems Group and Professor of Electrical and Electronic Engineering, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Towards a post-privacy world: proposed bill would encourage agencies to widely share your data


Bruce Baer Arnold, University of Canberra

The federal government has announced a plan to increase the sharing of citizen data across the public sector.

This would include data sitting with agencies such as Centrelink, the Australian Tax Office, the Department of Home Affairs, the Bureau of Statistics and potentially other external “accredited” parties such as universities and businesses.

The draft Data Availability and Transparency Bill released today will not fix ongoing problems in public administration. It won’t solve many problems in public health. It is a worrying shift to a post-privacy society.

It’s a matter of arrogance, rather than effectiveness. It highlights deficiencies in Australian law that need fixing.




Read more:
Australians accept government surveillance, for now


Making sense of the plan

Australian governments on all levels have built huge silos of information about us all. We supply the data for these silos each time we deal with government.

It’s difficult to exercise your rights and responsibilities without providing data. If you’re a voter, a company director, a doctor or a gun owner, or you receive welfare, pay tax, or hold a driver’s licence or Medicare card, our governments have data about you.

Much of this is supplied on a legally mandatory basis. It allows the federal, state, territory and local governments to provide pensions, parks, courts and hospitals, to run elections, and to collect rates, fees and taxes.

The proposed Data Availability and Transparency Bill will authorise large-scale sharing of data about citizens and non-citizens across the public sector, between both public and private bodies. The legislation was previously called the “Data Sharing and Release” bill; “transparency” has now replaced “release” in the title, apparently to allay public fears.

The legislation would allow sharing between Commonwealth government agencies that are currently constrained by a range of acts overseen (weakly) by the under-resourced Office of the Australian Information Commissioner (OAIC).

The acts often only apply to specific agencies or data. Overall we have a threadbare patchwork of law that is supposed to respect our privacy but often isn’t effective. It hasn’t kept pace with law in Europe and elsewhere in the world.

The plan also envisages sharing data with trusted third parties. They might be universities or other research institutions. In future, the sharing could extend to include state or territory agencies and the private sector, too.

Any public or private bodies that receive data can then share it forward. Irrespective of whether one has anything to hide, this plan is worrying.

Why will there be sharing?

Sharing isn’t necessarily a bad thing. But it should be done accountably and appropriately.

Consultations over the past two years have highlighted the value of inter-agency sharing for law enforcement and for research into health and welfare. Universities have identified a range of uses regarding urban planning, environment protection, crime, education, employment, investment, disease control and medical treatment.

Many researchers will be delighted by the prospect of accessing data more cheaply than doing onerous small-scale surveys. IT people have also been enthusiastic about money that could be made helping the databases of different agencies talk to each other.

However, the reality is more complicated, as researchers and civil society advocates have pointed out.

In a July speech to the Australian Society for Computers and Law, former High Court Justice Michael Kirby highlighted a growing need to fight for privacy, rather than let it slip away.
Shutterstock

Why should you be worried?

The plan for comprehensive data sharing is founded on the premise of accreditation of data recipients (entities deemed trustworthy) and oversight by the Office of the National Data Commissioner, under the proposed act.

The draft bill announced today is open for a short period of public comment before it goes to parliament. It features a consultation paper alongside a disquieting consultants’ report about the bill. In this report, the consultants refer to concerns and “high inherent risk”, but unsurprisingly appear to assume things will work out.

Federal Minister for Government Services Stuart Robert, who presided over the tragedy known as the RoboDebt scheme, is optimistic about the bill. He dismissed critics’ concerns by stating consent is implied when someone uses a government service. This seems disingenuous, given people typically don’t have a choice.

However, the bill does exclude some data sharing. If you’re a criminologist researching law enforcement, for example, you won’t have an open sesame. Experience with the national Privacy Act and other Commonwealth and state legislation tells us such exclusions weaken over time.

Outside the narrow exclusions centred on law enforcement and national security, the bill’s default position is to share widely and often. That’s because the accreditation requirements for agencies aren’t onerous and the bases for sharing are very broad.

This proposal exacerbates ongoing questions about day-to-day privacy protection. Who’s responsible, with what framework and what resources?

Responsibility is crucial, as national and state agencies recurrently experience data breaches, although, as RoboDebt revealed, they often respond with denial. Universities are also often wide open to data breaches.

Proponents of the plan argue privacy can be protected through robust de-identification, in other words removing the ability to identify specific individuals. However, research has recurrently shown “de-identification” is no silver bullet.

Most bodies don’t recognise the scope for re-identification of de-identified personal information and lots of sharing will emphasise data matching.

Be careful what you ask for

Sharing may result in social goods such as better cities, smarter government and healthier people by providing access to data (rather than just money) for service providers and researchers.

That said, our history of aspirational statements about privacy protection without meaningful enforcement by watchdogs should provoke some hard questions. It wasn’t long ago the government failed to prevent hackers from accessing sensitive data on more than 200,000 Australians.

It’s true this bill would ostensibly provide transparency, but it won’t provide genuine accountability. It shouldn’t be taken at face value.




Read more:
Seven ways the government can make Australians safer – without compromising online privacy


The Conversation


Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A computer can guess more than 100,000,000,000 passwords per second. Still think yours is secure?



Paul Haskell-Dowland, Author provided

Paul Haskell-Dowland, Edith Cowan University and Brianna O’Shea, Edith Cowan University

Passwords have been used for thousands of years as a means of identifying ourselves to others and, in more recent times, to computers. It’s a simple concept – a shared piece of information, kept secret between individuals and used to “prove” identity.

Passwords in an IT context emerged in the 1960s with mainframe computers – large centrally operated computers with remote “terminals” for user access. They’re now used for everything from the PIN we enter at an ATM, to logging in to our computers and various websites.

But why do we need to “prove” our identity to the systems we access? And why are passwords so hard to get right?




Read more:
The long history, and short future, of the password


What makes a good password?

Until relatively recently, a good password might have been a word or phrase of as little as six to eight characters. But we now have minimum length guidelines. This is because of “entropy”.

When talking about passwords, entropy is a measure of unpredictability. The maths behind this isn’t complex, but let’s examine it with an even simpler measure: the number of possible passwords, sometimes referred to as the “password space”.

If a one-character password only contains one lowercase letter, there are only 26 possible passwords (“a” to “z”). By including uppercase letters, we increase our password space to 52 potential passwords.

The password space continues to expand as the length is increased and other character types are added.

Making a password longer or more complex greatly increases the potential ‘password space’. More password space means a more secure password.

Looking at the above figures, it’s easy to understand why we’re encouraged to use long passwords with upper and lowercase letters, numbers and symbols. The more complex the password, the more attempts needed to guess it.

However, the problem with depending on password complexity is that computers are highly efficient at repeating tasks – including guessing passwords.

Last year, a record was set for a computer trying to generate every conceivable password. It achieved a rate faster than 100,000,000,000 guesses per second.

By leveraging this computing power, cyber criminals can hack into systems by bombarding them with as many password combinations as possible, in a process called brute force attacks.

And with cloud-based technology, guessing an eight-character password can be achieved in as little as 12 minutes and cost as little as US$25.
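The numbers above follow from simple division: password space divided by guessing rate. At the record rate quoted earlier, a single rig would exhaust every eight-character password in under a day; the 12-minute cloud figure assumes the work is spread across many machines in parallel. A rough sketch of the arithmetic:

```python
GUESSES_PER_SECOND = 100_000_000_000  # the record rate quoted above

def seconds_to_exhaust(charset_size: int, length: int) -> float:
    """Worst-case time to try every possible password at that rate."""
    return charset_size ** length / GUESSES_PER_SECOND

# Eight characters drawn from the ~95 printable ASCII characters:
hours = seconds_to_exhaust(95, 8) / 3600
print(f"8 characters: about {hours:.0f} hours on a single rig")

# Four more characters pushes the same attack beyond any realistic budget:
years = seconds_to_exhaust(95, 12) / (3600 * 24 * 365)
print(f"12 characters: about {years:,.0f} years")
```

This is why length, rather than exotic symbols, is the cheapest defence against brute force.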

Also, because passwords are almost always used to give access to sensitive data or important systems, this motivates cyber criminals to actively seek them out. It also drives a lucrative online market selling passwords, some of which come with email addresses and/or usernames.

You can purchase almost 600 million passwords online for just AU$14!

How are passwords stored on websites?

Website passwords are usually stored in a protected form using a one-way mathematical process called hashing. A hashed password is unrecognisable and can’t be turned back into the password (the process is irreversible).

When you try to log in, the password you enter is hashed using the same process and compared to the version stored on the site. This process is repeated each time you log in.

For example, the password “Pa$$w0rd” is given the value “02726d40f378e716981c4321d60ba3a325ed6a4c” when calculated using the SHA1 hashing algorithm. Try it yourself.
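Trying it yourself takes two lines with Python’s standard library:

```python
import hashlib

# Reproduce the article's example: the SHA1 hash of "Pa$$w0rd".
password = "Pa$$w0rd"
digest = hashlib.sha1(password.encode("utf-8")).hexdigest()
print(digest)  # 02726d40f378e716981c4321d60ba3a325ed6a4c

# The same input always yields the same hash -- which is exactly why
# precomputed lookup tables of common passwords work so well.
```

That determinism is the weakness exploited below: if your password is common, its hash is already sitting in someone’s lookup table.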

When faced with a file full of hashed passwords, a brute force attack can be used, trying every combination of characters for a range of password lengths. This has become such common practice that there are websites that list common passwords alongside their (calculated) hashed value. You can simply search for the hash to reveal the corresponding password.

This screenshot of a Google search result for the SHA hashed password value ‘02726d40f378e716981c4321d60ba3a325ed6a4c’ reveals the original password: ‘Pa$$w0rd’.

The theft and selling of password lists is now so common, a dedicated website — haveibeenpwned.com — is available to help users check if their accounts are “in the wild”. This has grown to include more than 10 billion account details.

If your email address is listed on this site you should definitely change the detected password, as well as your password on any other sites where you use the same credentials.
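The same site runs a companion Pwned Passwords service that lets you check a password against known breaches without ever sending the password itself. The sketch below assumes that service’s “range” API (a real endpoint at the time of writing): you send only the first five hex characters of the password’s SHA1 hash, fetch the matching suffixes, and compare locally.

```python
import hashlib

def pwned_range_query(password: str) -> tuple:
    """Prepare a k-anonymity lookup against the Pwned Passwords API.

    Only the first five hex characters of the SHA1 hash are sent to the
    service; the rest of the hash never leaves your machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    return f"https://api.pwnedpasswords.com/range/{prefix}", suffix

url, suffix = pwned_range_query("Pa$$w0rd")
print(url)  # fetch this URL, then scan the response for `suffix`:
# a line matching the suffix means the password has appeared in a breach.
```

The network fetch is omitted here; any HTTP client will do, and the response is plain text, one hash suffix (with a breach count) per line.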




Read more:
Will the hack of 500 million Yahoo accounts get everyone to protect their passwords?


Is more complexity the solution?

You would think with so many password breaches occurring daily, we would have improved our password selection practices. Unfortunately, last year’s annual SplashData password survey showed little change over five years.

The 2019 annual SplashData password survey revealed the most common passwords from 2015 to 2019.

As computing capabilities increase, the solution would appear to be increased complexity. But as humans, we are not skilled at (nor motivated to) remember highly complex passwords.

We’ve also passed the point where we use only two or three systems needing a password. It’s now common to access numerous sites, with each requiring a password (often of varying length and complexity). A recent survey suggests there are, on average, 70-80 passwords per person.

The good news is there are tools to address these issues. Most computers now support password storage in either the operating system or the web browser, usually with the option to share stored information across multiple devices.

Examples include Apple’s iCloud Keychain and the ability to save passwords in Internet Explorer, Chrome and Firefox (although less reliable).

Password managers such as KeePassXC can help users generate long, complex passwords and store them in a secure location for when they’re needed.

While this location still needs to be protected (usually with a long “master password”), using a password manager lets you have a unique, complex password for every website you visit.
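Generating a strong password doesn’t require a manager, though storing it does. As a sketch of what managers like KeePassXC do under the hood (this is an illustration, not their actual implementation), Python’s `secrets` module can produce a password from a cryptographically secure random source:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password over the full mixed-character alphabet.

    `secrets` draws from the operating system's cryptographically secure
    random number generator, unlike the predictable `random` module.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. a 20-character, ~130-bit password
```

A password like this is impractical to memorise, which is precisely why it belongs in a manager rather than your head.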

This won’t prevent a password from being stolen from a vulnerable website. But if it is stolen, you won’t have to worry about changing the same password on all your other sites.

There are of course vulnerabilities in these solutions too, but perhaps that’s a story for another day.




Read more:
Facebook hack reveals the perils of using a single account to log in to other services


The Conversation


Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University and Brianna O’Shea, Lecturer, Ethical Hacking and Defense, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Can I still be hacked with 2FA enabled?



Shutterstock

David Tuffley, Griffith University

Cybersecurity is like a game of whack-a-mole. As soon as the good guys put a stop to one type of attack, another pops up.

Usernames and passwords were once good enough to keep an account secure. But before long, cybercriminals figured out how to get around this.

Often they’ll use “brute force attacks”, bombarding a user’s account with various password and login combinations in a bid to guess the correct one.

To deal with such attacks, a second layer of security was added in an approach known as two-factor authentication, or 2FA. It’s widespread now, but does 2FA also leave room for loopholes cybercriminals can exploit?

2FA via text message

There are various types of 2FA. The most common method is to be sent a single-use code as an SMS message to your phone, which you then enter following a prompt from the website or service you’re trying to access.

Most of us are familiar with this method as it’s favoured by major social media platforms. However, while it may seem safe enough, it isn’t necessarily.

Hackers have been known to trick mobile phone carriers (such as Telstra or Optus) into transferring a victim’s phone number to their own phone.




Read more:
$2.5 billion lost over a decade: ‘Nigerian princes’ lose their sheen, but scams are on the rise


Pretending to be the intended victim, the hacker contacts the carrier with a story about losing their phone, requesting a new SIM with the victim’s number to be sent to them. Any authentication code sent to that number then goes directly to the hacker, granting them access to the victim’s accounts.
This method is called SIM swapping. It’s probably the easiest of several types of scams that can circumvent 2FA.

And while carriers’ verification processes for SIM requests are improving, a competent trickster can talk their way around them.

Authenticator apps

The authenticator method is more secure than 2FA via text message. It works on a principle known as TOTP, or “time-based one-time password”.

TOTP is more secure than SMS because a code is generated on your device rather than being sent across the network, where it might be intercepted.
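TOTP is an open standard (RFC 6238, building on the HMAC-based RFC 4226) and is compact enough to sketch. The implementation below is illustrative rather than production code, and the base32 secret shown is a made-up example, but the algorithm is the standard one:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at=None, digits: int = 6, step: int = 30) -> str:
    """Time-based one-time password (RFC 6238), using HMAC-SHA1.

    `secret_b32` is the base32-encoded shared secret a service shows you
    (often as a QR code) when you enrol an authenticator app.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() if at is None else at) // step
    msg = struct.pack(">Q", counter)  # 8-byte big-endian time counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Your phone and the server each derive the same code from the shared
# secret and the current 30-second window -- no code crosses the network.
print(totp("JBSWY3DPEHPK3PXP"))  # example secret; changes every 30 seconds
```

Because both sides compute the code independently, an attacker who merely observes your traffic learns nothing reusable; the remaining risk is malware on the device holding the secret, as discussed below.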

The authenticator method uses apps such as Google Authenticator, LastPass, 1Password, Microsoft Authenticator, Authy and Yubico.

However, while it’s safer than 2FA via SMS, there have been reports of hackers stealing authentication codes from Android smartphones. They do this by tricking the user into installing malware (software designed to cause harm) that copies and sends the codes to the hacker.

The Android operating system is easier to hack than Apple’s iOS. iOS is proprietary, while Android is open-source, which makes it easier to install malware on an Android device.

2FA using details unique to you

Biometric methods are another form of 2FA. These include fingerprint login, face recognition, retinal or iris scans, and voice recognition. Biometric identification is becoming popular for its ease of use.

Most smartphones today can be unlocked by placing a finger on the scanner or letting the camera scan your face – much quicker than entering a password or passcode.

However, biometric data can be hacked too, either from the servers where it is stored or from the software that processes it.

One case in point is last year’s BioStar 2 data breach, in which nearly 28 million biometric records were exposed. BioStar 2 is a security system that uses facial recognition and fingerprinting technology to help organisations secure access to buildings.

There can also be false negatives and false positives in biometric recognition. Dirt on the fingerprint reader or on the person’s finger can lead to false negatives. Also, faces can sometimes be similar enough to fool facial recognition systems.

Another type of 2FA comes in the form of personal security questions such as “what city did your parents meet in?” or “what was your first pet’s name?”




Read more:
Don’t be phish food! Tips to avoid sharing your personal information online


Only the most determined and resourceful hacker will be able to find answers to these questions. It’s unlikely, but still possible, especially as more of us adopt public online profiles.

Often when we share our lives on the internet, we fail to consider what kinds of people may be watching.
Shutterstock

2FA remains best practice

Despite all of the above, the biggest vulnerability to being hacked is still the human factor. Successful hackers have a bewildering array of psychological tricks in their arsenal.

A cyber attack could come as a polite request, a scary warning, a message ostensibly from a friend or colleague, or an intriguing “clickbait” link in an email.

The best way to protect yourself from hackers is to develop a healthy amount of scepticism. If you carefully check websites and links before clicking through and also use 2FA, the chances of being hacked become vanishingly small.

The bottom line is that 2FA is effective at keeping your accounts safe. However, try to avoid the less secure SMS method when given the option.

Just as burglars in the real world focus on houses with poor security, hackers on the internet look for weaknesses.

And while any security measure can be overcome with enough effort, a hacker won’t make that investment unless they stand to gain something of greater value.

The Conversation

David Tuffley, Senior Lecturer in Applied Ethics & CyberSecurity, Griffith University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Keep calm, but don’t just carry on: how to deal with China’s mass surveillance of thousands of Australians



Shutterstock

Bruce Baer Arnold, University of Canberra

National security is like sausage-making. We might enjoy the tasty product, but want to look away from the manufacturing.

Recent news that Chinese company Zhenhua Data is profiling more than 35,000 Australians isn’t a surprise to people with an interest in privacy, security and social networks. We need to think critically about this, knowing we can do something to prevent it from happening again.

Reports indicate Zhenhua provides services to the Chinese government. It may also provide services to businesses in China and overseas.

The company operates under Chinese law and doesn’t appear to have a presence in Australia. That means we can’t shut it down or penalise it for a breach of our law. Also, Beijing is unlikely to respond to expressions of outrage from Australia or condemnation by our government – especially amid recent sabre-rattling.




Read more:
Journalists have become diplomatic pawns in China’s relations with the West, setting a worrying precedent


Zhenhua is reported to have data on more than 35,000 Australians – a list dominated by political leaders and prominent figures. Names, birthdays, addresses, marital status, photographs, political associations, relatives and social media account details are among the information extracted.

It seems Zhenhua has data on a wide range of Australians, including public figures such as Victorian supreme court judge Anthony Cavanough, Australia’s former ambassador to China Geoff Raby, former NSW premier and federal foreign affairs minister Bob Carr, tech billionaire Mike Cannon-Brookes and singer Natalie Imbruglia.

It’s not clear how individuals are being targeted. The profiling might be systematic. It might instead be conducted on the basis of a specific industry, academic discipline, public prominence or perceived political influence.

It’s unlikely Zhenhua profiles random members of the public. That means there’s no reason for average citizens without a China connection to be worried.

Still, details around the intelligence gathering elude us, so the best practice for the public is to maintain as much online privacy as possible, whenever possible.

Overall, we don’t know much about Zhenhua’s goals. And what we do know came from a leak to a US academic who sensibly fled China in 2018, fearing for his safety.

Pervasive surveillance is the norm

Pervasive surveillance is now a standard feature of all major governments, which often rely on surveillance-for-profit companies. Governments in the West buy services from big data analytic companies such as Palantir.

Australia’s government gathers information outside our borders, too. Take the bugging of the Timor-Leste government, a supposed friend rather than enemy.

How sophisticated is the plot?

Revelations about Zhenhua have referred to the use of artificial intelligence and the “mosaic” method of intelligence gathering. But this is probably less exciting than it sounds.

Reports indicate much of the data was extracted from online open sources. Access to much of this would have simply involved using algorithms to aggregate targets’ names, dates, qualifications and work history data found on publicly available sites.

The algorithms then help put the individual pieces of the “mosaic” together and fill in the holes on the basis of each individual’s relationships with others, such as their peers, colleagues or partners.

Some of the data for the mosaic may come from hacking or be gathered directly by the profiler. According to the ABC, some data that landed in Zhenhua’s lap was taken from the dark web.

One seller might have spent years copying data from university networks. For example, last year the Australian National University acknowledged major personal data breaches had taken place, potentially extending back 19 years.

This year there was also the unauthorised (and avoidable) access by cybercriminals to NSW government data on 200,000 people.

While it may be confronting to know a foreign state is compiling information on Australian citizens, it should be comforting to learn sharing this information can be avoided – if you’re careful.

What’s going on in the black box?

One big question is what Zhenhua’s customers in China’s political and business spheres might do with the data they’ve compiled on Australian citizens. Frankly, we don’t know. National security is often a black box and we are unlikely ever to get verifiable details.

Apart from distaste at being profiled, we might say being watched is no big deal, especially given many of those on the list are already public figures. Simply having an AI-assisted “Who’s Who” of prominent Australians isn’t necessarily frightening.

However, it is of concern if the information collected is being used for disinformation, such as through any means intended to erode trust in political processes, or subvert elections.

For instance, a report published in June by the Australian Strategic Policy Institute detailed how Chinese-speaking people in Australia were being targeted by a “persistent, large-scale influence campaign linked to Chinese state actors”.

In June, Prime Minister Scott Morrison announced China was supposedly behind a major state-based attack against several of Australia’s sectors, including all levels of government.
Shutterstock

Deep fake videos are another form of subversion of increasing concern to governments and academics, particularly in the US.




Read more:
Deepfake videos could destroy trust in society – here’s how to restore it


Can we fix this?

We can’t make Zhenhua and its competitors disappear. Governments think they are too useful.

Making everything visible to state surveillance is now the ambition of many law enforcement bodies and all intelligence agencies. It’s akin to Google and its competitors wanting to know (and sell) everything about us, without regard for privacy as a human right.

We can, however, build resilience.

One way is to require government agencies and businesses to safeguard their databases. That hasn’t been the case with the NSW government, Commonwealth governments, Facebook, dating services and major hospitals.

In Australia, we need to adopt recommendations by law reform inquiries and establish a national right to privacy. The associated privacy tort would incentivise data custodians and also encourage the public to avoid oversharing online.

In doing so, we might be better placed to condemn both China and other nations participating in unethical intelligence gathering, while properly acknowledging our own wrongdoings in Timor-Leste.

The Conversation

Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Private browsing: What it does – and doesn’t do – to shield you from prying eyes on the web



The major browsers have privacy modes, but don’t confuse privacy for anonymity.
Oleg Mishutin/iStock via Getty Images

Lorrie Cranor, Carnegie Mellon University and Hana Habib, Carnegie Mellon University

Many people look for more privacy when they browse the web by using their browsers in privacy-protecting modes, called “Private Browsing” in Mozilla Firefox, Opera and Apple Safari; “Incognito” in Google Chrome; and “InPrivate” in Microsoft Edge.

These private browsing tools sound reassuring, and they’re popular. According to a 2017 survey, nearly half of American internet users have tried a private browsing mode, and most who have tried it use it regularly.

However, our research has found that many people who use private browsing have misconceptions about what protection they’re gaining. A common misconception is that these browser modes allow you to browse the web anonymously, surfing the web without websites identifying you and without your internet service provider or your employer knowing what websites you visit. The tools actually provide much more limited protections.

Other studies conducted by the Pew Research Center and the privacy-protective search engine company DuckDuckGo have similar findings. In fact, a recent lawsuit against Google alleges that internet users are not getting the privacy protection they expect when using Chrome’s Incognito mode.

How it works

While the exact implementation varies from browser to browser, what private browsing modes have in common is that once you close your private browsing window, your browser no longer stores the websites you visited, cookies, user names, passwords and information from forms you filled out during that private browsing session.

Essentially, each time you open a new private browsing window you are given a “clean slate” in the form of a brand new browser window that has not stored any browsing history or cookies. When you close your private browsing window, the slate is wiped clean again and the browsing history and cookies from that private browsing session are deleted. However, if you bookmark a site or download a file while using private browsing mode, the bookmarks and file will remain on your system.

Although some browsers, including Safari and Firefox, offer some additional protection against web trackers, private browsing mode does not guarantee that your web activities cannot be linked back to you or your device. Notably, private browsing mode does not prevent websites from learning your internet address, and it does not prevent your employer, school or internet service provider from seeing your web activities by tracking your IP address.

Reasons to use it

We conducted a research study in which we identified reasons people use private browsing mode. Most study participants wanted to protect their browsing activities or personal data from other users of their devices. Private browsing is actually pretty effective for this purpose.

We found that people often used private browsing to visit websites or conduct searches that they did not want other users of their device to see, such as those that might be embarrassing or related to a surprise gift. In addition, private browsing is an easy way to log out of websites when borrowing someone else’s device – so long as you remember to close the window when you are done.

Private browsing can help cover your internet tracks by automatically deleting your browsing history and cookies when you close the browser.
Avishek Das/SOPA Images/LightRocket via Getty Images

Private browsing provides some protection against cookie-based tracking. Since cookies from your private browsing session are not stored after you close your private browsing window, it’s less likely that you will see online advertising in the future related to the websites you visit while using private browsing.

[Get the best of The Conversation, every weekend. Sign up for our weekly newsletter.]

Additionally, as long as you have not logged into your Google account, any searches you make will not appear in your Google account history and will not affect future Google search results. Similarly, if you watch a video on YouTube or other service in private browsing, as long as you are not logged into that service, your activity does not affect the recommendations you get in normal browsing mode.

What it doesn’t do

Private browsing does not make you anonymous online. Anyone who can see your internet traffic – your school or employer, your internet service provider, government agencies, people snooping on your public wireless connection – can see your browsing activity. Shielding that activity requires more sophisticated tools that use encryption, like virtual private networks.

Private browsing also offers few security protections. In particular, it does not prevent you from downloading a virus or malware to your device. Additionally, private browsing does not offer any additional protection for the transmission of your credit card or other personal information to a website when you fill out an online form.

It is also important to note that the longer you leave your private browsing window open, the more browsing data and cookies it accumulates, reducing your privacy protection. Therefore, you should get in the habit of closing your private browsing window frequently to wipe your slate clean.

What’s in a name

It is not all that surprising that people have misconceptions about how private browsing mode works; the word “private” suggests a lot more protection than these modes actually provide.

Furthermore, a 2018 research study found that the disclosures shown on the landing pages of private browsing windows do little to dispel misconceptions that people have about these modes. Chrome provides more information about what is and is not protected than most of the other browsers, and Mozilla now links to an informational page on the common myths related to private browsing.

However, it may be difficult to dispel all of these myths without changing the name of the browsing mode and making it clear that private browsing stops your browser from keeping a record of your browsing activity, but it isn’t a comprehensive privacy shield.

The Conversation

Lorrie Cranor, Professor of Computer Science and of Engineering & Public Policy, Carnegie Mellon University and Hana Habib, Graduate Research Assistant at the Institute for Software Research, Carnegie Mellon University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Morrison’s $1.3 billion for more ‘cyber spies’ is an incremental response to a radical problem



Mick Tsikas/AAP

Greg Austin, UNSW

The federal government has announced it will spend more than a billion dollars over the next ten years to boost Australia’s cyber defences.

This comes barely a week after Prime Minister Scott Morrison warned the country was in the grip of a “sophisticated” cyber attack by a “state-based” actor, widely reported to be China.




Read more:
Morrison announces repurposing of defence money to fight increasing cyber threats


The announcement can be seen as a mix of the right stuff and political window dressing – deflecting attention away from Australia’s underlying weaknesses when it comes to cyber security.

What is the funding for?

Morrison’s cyber announcement includes a package of measures totalling $1.35 billion over ten years.

This includes funding to disrupt offshore cyber crime, intelligence sharing between government and industry, new research labs and more than 500 “cyber spy” jobs.

As Morrison explained

This … will mean that we can identify more cyber threats, disrupt more foreign cyber criminals, build more partnerships with industry and government and protect more Australians.

The key aim is to help the country’s cyber intelligence agency, the Australian Signals Directorate (ASD), to know as soon as possible who is attacking Australia, with what, and how the attack can best be stopped.

Australia’s cyber deficiencies

Australia certainly needs to do more to defend itself against cyber attacks.

Intelligence specialists, such as top public servant Nick Warner, have been advocating for more attention to cyber threats for years.

Concerns about Australia’s cyber defences have been raised for years.
http://www.shutterstock.com

The government is also acknowledging publicly that the threats are increasing.

Earlier this month, Morrison held an unusual press conference to announce that Australia was under cyber attack.

While he did not specify who was responsible, government statements made plain it was the same malicious actor (a foreign government), using the same tools, as an attack reported in May this year.

Related attacks on Australia using similar malware were also identified in May 2019.

This type of threat is called an “advanced persistent threat” because it is hard to get it out of a system, even if you know it is there.




Read more:
Australia is under sustained cyber attack, warns the government. What’s going on, and what should businesses do?


All countries face enormous difficulties in cyber defence, and Australia is arguably among the top states in cyber security world-wide. Yet after a decade of incremental reforms, the government has been unable to organise all of its own departments to implement more than basic mitigation strategies.

New jobs in cyber security

The biggest slice of the $1.35 billion is a “$470 million investment to expand our cyber security workforce”.

This is by any measure an essential underpinning and is to be applauded.

The Morrison government wants to recruit more than 500 new ASD employees.
http://www.shutterstock.com

But it is not yet clear how “new” these new jobs are.

The 2016 Defence White Paper announced a ten year workforce expansion of 1,700 jobs in intelligence and cyber security. This included a 900-person joint cyber unit in the Australian Defence Force, announced in 2017.

The newly mooted expansion for ASD will also need to be undertaken gradually. It will be impossible to find hundreds of additional staff with the right skills straight away.

The skills needed cut across many sub-disciplines of cyber operations, and must be fine-tuned across various roles. ASD has identified four career streams (analysis, systems architecture, operations and testing) but these do not reflect the diversity of talents needed.

It’s clear Australian universities do not currently train people at the advanced levels needed by ASD, so advanced on-the-job training is essential.

Political window dressing

The government is promoting its announcement as the “nation’s largest ever investment in cyber security”. But the seemingly generous $1.35 billion cyber initiative does not involve new money.

The package is also a pre-announcement of part of the government’s upcoming 2020 Cyber Security Strategy, expected within weeks.

This will update the 2016 strategy released under former prime minister Malcolm Turnbull and cyber elements of the 2016 Defence White Paper.




Read more:
Australia is facing a looming cyber emergency, and we don’t have the high-tech workforce to counter it


The new cyber strategy has been the subject of country-wide consultations through 2019, but few observers expect significant new funding injections.

The main exceptions that may receive a funding boost compared with 2016 are likely to be education funding (as opposed to research) and community awareness.

With the release of the new cyber strategy understood to be imminent, it is unclear why the government chose this particular week to make the pre-announcement. It has obviously kept some big news for the strategy’s release.

The federal government is expected to release a new cyber security strategy within weeks.
http://www.shutterstock.com

The government’s claim that an additional $135 million per year is the “largest ever investment in cyber security” is true in a sense. But this is the case in many areas of government expenditure.

The government has obviously cut pre-planned expenses in some unrevealed areas of Defence.

Meanwhile, the issues this funding is supposed to address are so complex that $1.35 billion over ten years can best be seen as an incremental response to a radical threat.

Australia needs to do much more

According to authoritative sources, including the federal government-funded AustCyber in 2019, there are a number of underlying deficiencies in Australia’s industrial and economic response to cyber security.

These can only be improved if federal government departments adopt stricter approaches, if state governments follow suit, and if the private sector makes appropriate adjustments.

Above all, the leading players need to shift their planning to better accommodate the organisational and management aspects of cyber security delivery.




Read more:
Australia is vulnerable to a catastrophic cyber attack, but the Coalition has a poor cyber security track record


Yes, we need to up our technical game, but our social response is also essential.

CEOs and departmental secretaries should be legally obliged to attest every year that they have sound cyber security practices and their entire organisations are properly trained.

Without better corporate management, Australia’s cyber defences will remain fragmented and inadequate.The Conversation

Greg Austin, Professor UNSW Canberra Cyber, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.