Web’s inventor says news media bargaining code could break the internet. He’s right — but there’s a fix


Tama Leaver, Curtin University

The inventor of the World Wide Web, Tim Berners-Lee, has raised concerns that Australia’s proposed News Media and Digital Platforms Mandatory Bargaining Code could fundamentally break the internet as we know it.

His concerns are valid. However, they could be addressed through minor changes to the proposed code.

How could the code break the web?

The news media bargaining code aims to level the playing field between media companies and online giants. It would do this by forcing Facebook and Google to pay Australian news businesses for content linked to or featured on their platforms.

In a submission to the Senate inquiry about the code, Berners-Lee wrote:

Specifically, I am concerned that the Code risks breaching a fundamental principle of the web by requiring payment for linking between certain content online. […] The ability to link freely — meaning without limitations regarding the content of the linked site and without monetary fees — is fundamental to how the web operates.

Currently, one of the most basic underlying principles of the web is that there is no cost involved in creating a hypertext link (or simply a “link”) to any other page or object online.

When Berners-Lee first devised the World Wide Web in 1989, he effectively gave away the idea and associated software for free, to ensure nobody would or could charge for using its protocols.

He argues the news media bargaining code could set a legal precedent allowing someone to charge for linking, which would let the genie out of the bottle — and plenty more attempts to charge for linking to content would appear.

If the precedent were set that people could be charged for simply linking to content online, it’s possible the underlying principle of linking would be disrupted.

As a result, there would likely be many attempts by both legitimate companies and scammers to charge users for what is currently free.

While supporting the “right of publishers and content creators to be properly rewarded for their work”, Berners-Lee asks that the code be amended to maintain the principle of allowing free linking between content.




Read more:
Google News favours mainstream media. Even if it pays for Australian content, will local outlets fall further behind?


Google and Facebook don’t just link to content

Part of the issue here is Google and Facebook don’t just collect a list of interesting links to news content. Rather, the way they find, sort, curate and present news content adds value for their users.

They don’t just link to news content, they reframe it. It is often in that reframing that advertisements appear, and this is where these platforms make money.

For example, this link will take you to the original 1989 proposal for the World Wide Web. Right now, anyone can create such a link to any other page or object on the web, without having to pay anyone else.

But what Facebook and Google do in curating news content is fundamentally different. They create compelling previews, usually by offering the headline of a news article, sometimes the first few lines, and often the first image extracted.

For instance, here is a preview Google generates when someone searches for Tim Berners-Lee’s Web proposal:

This is a screen capture of the results page for the Google Search: ‘tim berners lee www proposal’.
Google

Evidently, what Google returns is more of a media-rich, detailed preview than a simple link. For Google’s users, this is a much more meaningful preview of the content and better enables them to decide whether they’ll click through to see more.

Another huge challenge for media businesses is that increasing numbers of users are taking headlines and previews at face value, without necessarily reading the article.

This can obviously decrease revenue for news providers, as well as perpetuate misinformation. Indeed, it’s one of the reasons Twitter began asking users to actually read content before retweeting it.

A fairly compelling argument, then, is that Google and Facebook add value for consumers via the reframing, curating and previewing of content — not just by linking to it.

Can the code be fixed?

Currently in the code, the section concerning how platforms are “Making content available” lists three ways content is shared:

  1. content is reproduced on the service
  2. content is linked to
  3. an extract or preview is made available.

Similar terms are used to detail how users might interact with content.

The Treasury Laws Amendment (News Media and Digital Platforms Mandatory Bargaining Code) Bill 2020 outlines three main ways by which platforms make news content available.
Australian Government

If we accept most of the additional value platforms provide to their users is in curating and providing previews of content, then deleting the second element (which just specifies linking to content) would fix Berners-Lee’s concerns.

It would ensure the use of links alone can’t be monetised, as has always been true on the web. Platforms would still need to pay when they present users with extracts or previews of articles, but not when they merely link to them.

Since basic links are not the bread and butter of big platforms, this change wouldn’t fundamentally alter the purpose or principle of creating a more level playing field for news businesses and platforms.




Read more:
It’s not ‘fair’ and it won’t work: an argument against the ACCC forcing Google and Facebook to pay for news


In its current form, the News Media and Digital Platforms Mandatory Bargaining Code could put the underlying principles of the world wide web in jeopardy. Tim Berners-Lee is right to raise this point.

But a relatively small tweak to the code would prevent this. It would allow us to focus on where big platforms actually provide value for users, and where the clearest justification lies in asking them to pay for news content.


For transparency, it should be noted The Conversation has also made a submission to the Senate inquiry regarding the News Media and Digital Platforms Mandatory Bargaining Code.

Tama Leaver, Professor of Internet Studies, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Is news worth a lot or a little? Google and Facebook want to have it both ways


Tim Dwyer, University of Sydney

Executives from Google and Facebook have told a Senate committee they are prepared to take drastic action if Australia’s news media bargaining code, which would force the internet giants to pay news publishers for linking to their sites, comes into force.

Google would have “no real choice” but to cut Australian users off entirely from its flagship search engine, the company’s Australian managing director Mel Silva told the committee. Facebook representatives in turn said they would remove links to news articles from the newsfeed of Australian users if the code came into effect as it currently stands.




Read more:
Expect delays and power plays: Google and Facebook brace as news media bargaining code is set to become law


In response, the Australian government shows no sign of backing down, with Prime Minister Scott Morrison and Treasurer Josh Frydenberg both saying they won’t respond to threats.

So what’s going on here? Are Google and Facebook really prepared to pull services from their Australian users rather than hand over some money to publishers under the bargaining code?

Is news valuable to Facebook and Google?

Facebook claims news is of little real value to its business. It doesn’t make money from news directly, and says that for the average Australian user, links to Australian news make up less than 5% of their newsfeed.

But this is hard to square with other information. In 2020, the University of Canberra’s Digital News Report found some 52% of Australians get news via social media, and the number is growing. Facebook also boasts of its investments in news via deals with publishers and new products such as Facebook News.

Google likewise says it makes little money from news, while at the same time investing heavily in news products like News Showcase.

So while links to news may not be direct advertising money-spinners for Facebook or Google, both see the presence of news as an important aspect of audience engagement with their products.

On their own terms

While both companies are prepared to give some money to news publishers, they want to make deals on their own terms. But Google and Facebook are two of the largest and most profitable companies in history – and each holds far more bargaining power than any news publisher. The news media bargaining code sets out to undo this imbalance.

What’s more, Google and Facebook don’t appear to want to accept the unique social role of news, and public interest journalism in particular. Nor do they recognise they might be involved somehow in the decline of the news business over the past decade or two, instead pointing the finger at impersonal shifts in advertising technology.

The media bargaining code being introduced is far too systematic for them to want to accept it. They would rather pick and choose commercial agreements with “genuine commercial consideration”, and not be bound by a one-size-fits-all set of arbitration rules.




Read more:
Changing the rules to control monopolies could see the end of Facebook domination


A history of US monopolies

Google and Facebook dominate web search and social media, respectively, in ways that echo the great US monopolies of the past: rail in the 19th century, then oil and later telecommunications in the 20th. All these industries became fundamental forms of capitalist infrastructure for economic and social development. And all these monopolies required legislation to break them up in the public interest.

It’s unsurprising that the giant ad-tech media platforms don’t want to follow the rules, but they must acknowledge that their great wealth and power come with a moral responsibility to society. Making them face up to that responsibility will require government intervention.

Online pioneers Vint Cerf (now VP and Chief Internet Evangelist at Google) and Tim Berners-Lee (“inventor of the World Wide Web”) have also made submissions to the Senate committee advocating on behalf of the corporations. They made high-minded claims that the code will break the “free and open” internet.




Read more:
Web’s inventor says news media bargaining code could break the internet. He’s right — but there’s a fix


But today’s internet is hardly free and open: for most users “the internet” is huge corporate platforms like Google and Facebook. And those corporations don’t want Australian senators interfering with their business model.

Independent senator Rex Patrick hit the nail on the head when he asked why Google wouldn’t admit the fundamental issue was about revenue, rather than technical detail or questions of principle.

How seriously should we take threats to leave the Australian market?

Google and Facebook are prepared to go along with the Senate committee’s processes, so long as they can modify the arrangement. They don’t want to be seen as uncooperative.

The threat to leave (or as Facebook’s Simon Milner put it, the “explanation” of why they would be forced to do so) is their worst-case scenario. It seems likely they would risk losing significant numbers of users if they did so, or at least having them much less engaged – and hence producing less advertising revenue.

Google has already run small-scale experiments to test removing Australian news from search. This may be a demonstration that the threat to withdraw from Australia is serious, or at least, serious brinkmanship.

People know news is important: it shapes their interactions with the world, provides meaning and helps them navigate their lives. So if Google and Facebook really do follow through, who would Australians blame? The government, or the friendly tech giants they see every day? That’s harder to know.


For transparency, please note The Conversation has also made a submission to the Senate inquiry regarding the News Media and Digital Platforms Mandatory Bargaining Code.

Tim Dwyer, Associate Professor, Department of Media and Communications, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

There’s no such thing as ‘alternative facts’. 5 ways to spot misinformation and stop sharing it online




Mark Pearson, Griffith University

The blame for the recent assault on the US Capitol and President Donald Trump’s broader dismantling of democratic institutions and norms can be laid at least partly on misinformation and conspiracy theories.

Those who spread misinformation, like Trump himself, are exploiting people’s lack of media literacy — it’s easy to spread lies to people who are prone to believe what they read online without questioning it.

We are living in a dangerous age where the internet makes it possible to spread misinformation far and wide and most people lack the basic fact-checking abilities to discern fact from fiction — or, worse, the desire to develop a healthy skepticism at all.




Read more:
Stopping the spread of COVID-19 misinformation is the best 2021 New Year’s resolution


Journalists are trained in this sort of thing — that is, the responsible ones who are trying to counter misinformation with truth.

Here are five fundamental lessons from Journalism 101 that all citizens can learn to improve their media literacy and fact-checking skills:

1. Distinguishing verified facts from myths, rumours and opinions

Cold, hard facts are the building blocks for considered and reasonable opinions in politics, media and law.

And there are no such things as “alternative facts” — facts are facts. Just because a falsity has been repeated many times by important people and their affiliates does not make it true.

We cannot expect the average citizen to have the skills of an academic researcher, journalist or judge in determining the veracity of an asserted statement. However, we can teach people some basic strategies before they mistake mere assertions for actual facts.

Does a basic internet search show these assertions have been confirmed by usually reliable sources – such as non-partisan mainstream news organisations, government websites and expert academics?

Students are taught to look to the URL of more authoritative sites — such as .gov or .edu — as a good hint at the factual basis of an assertion.

Searches and hashtags in social media are much less reliable as verification tools because you could be fishing within the “bubble” (or “echo chamber”) of those who share common interests, fears and prejudices – and are more likely to be perpetuating myths and rumours.

2. Mixing up your media and social media diet

We need to break out of our own “echo chambers” and our tendencies to access only the news and views of those who agree with us, on the topics that interest us and where we feel most comfortable.

For example, over much of the past five years, I have deliberately switched between various conservative and liberal media outlets when something important has happened in the US.

By looking at the coverage of the left- and right-wing media, I can hope to find a common set of facts both sides agree on — beyond the partisan rhetoric and spin. And if only one side is reporting something, I know to question this assertion and not just take it at face value.

3. Being skeptical and assessing the factual premise of an opinion

Journalism students learn to approach the claims of their sources with a “healthy skepticism”. For instance, if you are interviewing someone and they make what seems to be a bold or questionable claim, it’s good practice to pause and ask what facts the claim is based on.

Students are taught in media law this is the key to the fair comment defence to a defamation action. This permits us to publish defamatory opinions on matters of public interest as long as they are reasonably based on provable facts put forth by the publication.

The ABC’s Media Watch used this defence successfully (at trial and on appeal) when it criticised a Sydney Sun-Herald journalist’s reporting that claimed toxic materials had been found near a children’s playground.

This assessment of the factual basis of an opinion is not reserved for defamation lawyers – it is an exercise we can all undertake as we decide whether someone’s opinion deserves our serious attention and republication.




Read more:
Teaching children digital literacy skills helps them navigate and respond to misinformation


4. Exploring the background and motives of media and sources

A key skill in media literacy is the ability to look behind the veil of those who want our attention — media outlets, social media influencers and bloggers — to investigate their allegiances, sponsorships and business models.

For instance, these are some key questions to ask:

  • who is behind that think tank whose views you are retweeting?

  • who owns the online newspaper you read and what other commercial interests do they hold?

  • is your media diet dominated by news produced from the same corporate entity?

  • why does someone need to be so loud or insulting in their commentary; is this indicative of their neglect of important facts that might counter their view?

  • what might an individual or company have to gain or lose by taking a position on an issue, and how might that influence their opinion?

Just because someone has an agenda does not mean their facts are wrong — but it is a good reason to be even more skeptical in your verification processes.




Read more:
Why is it so hard to stop COVID-19 misinformation spreading on social media?


5. Reflecting and verifying before sharing

We live in an era of instant republication. We immediately retweet and share content we see on social media, often without even having read it thoroughly, let alone having fact-checked it.

Mindful reflection before pressing that sharing button would allow you to ask yourself, “Why am I even choosing to share this material?”

You could also help shore up democracy by engaging in the fact-checking processes mentioned above to avoid being part of the problem by spreading misinformation.

Mark Pearson, Professor of Journalism and Social Media, Griffith Centre for Social and Cultural Research, Griffith University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

83% of Australians want tougher privacy laws. Now’s your chance to tell the government what you want




Normann Witzleb, Monash University

Federal Attorney-General Christian Porter has called for submissions to the long-awaited review of the federal Privacy Act 1988.

This is the first wide-ranging review of privacy laws since the Australian Law Reform Commission produced a landmark report in 2008.

Australia has in the past often hesitated to adopt a strong privacy framework. The new review, however, provides an opportunity to improve data protection rules to an internationally competitive standard.

Here are some of the ideas proposed — and what’s at stake if we get this wrong.




Read more:
It’s time for privacy invasion to be a legal wrong


Australians care deeply about data privacy

Personal information has never had a more central role in our society and economy, and the government has a strong mandate to update Australia’s framework for the protection of personal information.

In the Australian Privacy Commissioner’s 2020 survey, 83% of Australians said they’d like the government to do more to protect the privacy of their data.

The intense debate about the COVIDSafe app earlier this year also shows Australians care deeply about their private information, even in a time of crisis.

Privacy laws and enforcement can hardly keep up with the ever-increasing digitalisation of our lives. Data-driven innovation provides valuable services that many of us use and enjoy. However, the government’s issues paper notes:

As Australians spend more of their time online, and new technologies emerge, such as artificial intelligence, more personal information about individuals is being captured and processed, raising questions as to whether Australian privacy law is fit for purpose.

The pandemic has accelerated the existing trend towards digitalisation and created a range of new privacy issues including working or studying at home, and the use of personal data in contact tracing.

Australians are rightly concerned they are losing control over their personal data.

So there’s no question the government’s review is sorely needed.

Issues of concern for the new privacy review

The government’s review follows the Australian Competition and Consumer Commission’s Digital Platforms Inquiry, which found that some data practices of digital platforms are unfair and undermine consumer trust. We rely heavily on digital platforms such as Google and Facebook for information, entertainment and engagement with the world around us.

Our interactions with these platforms leave countless digital traces that allow us to be profiled and tracked for profit. The Australian Competition and Consumer Commission (ACCC) found that the digital platforms make it hard for consumers to resist these practices and to make free and informed decisions regarding the collection, use and disclosure of their personal data.

The government has committed to implement most of the ACCC’s recommendations for stronger privacy laws to give us greater consumer control.

However, the reforms must go further. The review also provides an opportunity to address some long-standing weaknesses of Australia’s privacy regime.

The government’s issues paper, released to inform the review, identified several areas of particular concern. These include:

  • the scope of application of the Privacy Act, in particular the definition of “personal information” and current private sector exemptions

  • whether the Privacy Act provides an effective framework for promoting good privacy practices

  • whether individuals should have a direct right to sue for a breach of privacy obligations under the Privacy Act

  • whether a statutory tort for serious invasions of privacy should be introduced into Australian law, allowing Australians to go to court if their privacy is invaded

  • whether the enforcement powers of the Privacy Commissioner should be strengthened.

While most recent attention relates to improving consumer choice and control over their personal data, the review also brings back onto the agenda some never-implemented recommendations from the Australian Law Reform Commission’s 2008 review.

These include introducing a statutory tort for serious invasions of privacy, and extending the coverage of the Privacy Act.

Exemptions for small business and political parties should be reviewed

The Privacy Act currently contains several exemptions that limit its scope. The two most contentious exemptions have the effect that political parties and most business organisations need not comply with the general data protection standards under the Act.

The small business exemption is intended to reduce red tape for small operators. However, largely unknown to the Australian public, it means the vast majority of Australian businesses are not legally obliged to comply with standards for fair and safe handling of personal information.

Procedures for compulsory venue check-ins under COVID health regulations are just one recent illustration of why this is a problem. Some people have raised concerns that customers’ contact-tracing data, in particular collected via QR codes, may be exploited by marketing companies for targeted advertising.


Under current privacy laws, cafe and restaurant operators are generally exempt from complying with privacy obligations to undertake due diligence checks on third-party providers used to collect customers’ data.

The political exemption is another area in need of reform. As the Facebook/Cambridge Analytica scandal showed, political campaigning is becoming increasingly tech-driven.

However, Australian political parties are exempt from complying with the Privacy Act and anti-spam legislation. This means voters cannot effectively protect themselves against data harvesting for political purposes and micro-targeting in election campaigns through unsolicited text messages.

There is a good case for arguing political parties and candidates should be subject to the same rules as other organisations. It’s what most Australians would like and, in fact, wrongly believe is already in place.




Read more:
How political parties legally harvest your data and use it to bombard you with election spam


Trust drives innovation

Trust in digital technologies is undermined when data practices come across as opaque, creepy or unsafe.

There is increasing recognition that data protection drives innovation and adoption of modern applications, rather than impedes it.


The COVIDSafe app is a good example. When that app was debated, the government accepted that robust privacy protections were necessary to achieve a strong uptake by the community.

We would all benefit if the government saw that this same principle applies to other areas of society where our precious data is collected.


Information on how to make a submission to the federal government review of the Privacy Act 1988 can be found here.




Read more:
People want data privacy but don’t always know what they’re getting




Normann Witzleb, Associate Professor in Law, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How Australia can reap the benefits and dodge the dangers of the Internet of Things




Kayleen Manwaring, UNSW and Peter Leonard, UNSW

The Internet of Things (IoT) is already all around us. Online devices have become essential in industries from manufacturing and healthcare to agriculture and environmental management, not to mention our own homes. Digital consulting firm Ovum estimates that by 2022 Australian homes will host more than 47 million IoT devices, and the value of the global market will exceed US$1 trillion.

The IoT presents great opportunities, but it brings many risks too. Problems include excessive surveillance, loss of privacy, transparency and control, and reliance on unsafe or unsuitable services or devices.




Read more:
Explainer: the Internet of Things


In some places, such as the European Union, Germany, South Korea and the United Kingdom, governments have been quick to develop policies and some limited regulation to take advantage of the technology and mitigate its harmful impacts.

Australia has been late to react. Even recent moves by the federal government to make IoT devices more secure have been far behind international developments.

A report launched today by the Australian Council of Learned Academies (ACOLA) may help get Australia up to speed. It supplies a wide-ranging, peer-reviewed base of evidence about opportunities, benefits and challenges the IoT presents Australia over the next decade.

Benefits of the Internet of Things

The report examines how we can improve our lives with IoT-related technologies. It explores a range of applications across Australian cities and rural, regional and remote areas.

Some IoT services are already available, such as the Smart Cities and Suburbs program run by local and federal governments. This program funds projects in areas such as traffic congestion, waste management and urban safety.

Health applications are also on the rise. The University of New England has piloted the remote monitoring of COVID-19 patients with mild symptoms using IoT-enabled pulse oximeters.

Augmented and virtual reality applications too are becoming more common. IoT devices can track carbon emissions in supply chains and energy use in homes. IoT services can also help governments make public transport infrastructure more efficient.

The benefits of the IoT won’t only be felt in cities. There may be even more to be gained in rural, regional and remote areas. The IoT can aid agriculture in many ways, and can help prevent and manage bushfires and other environmental disasters. Sophisticated remote learning and health care will also benefit people outside urban areas.

While some benefits of the IoT will be felt everywhere, some will have more impact in cities and others in rural, remote and regional areas.
ACOLA, CC BY-NC

Opportunities for the Australian economy

The IoT presents critical opportunities for economic growth. In 2016-17, IoT activity was already worth A$74.3 billion to the Australian economy.

The IoT can facilitate more data-informed processes and automation (also known as Industry 4.0). This has immediate potential for substantial benefits.

One opportunity for Australia is niche manufacturing. Making bespoke products would be more efficient with IoT capability, which would let Australian businesses reach a consumer market with wide product ranges but low domestic volumes due to our small population.

Agricultural innovation enabled by the IoT, using Australia’s existing capabilities and expertise, is another promising area for investment.




Read more:
Six things every consumer should know about the ‘Internet of Things’


Risks of the Internet of Things

IoT devices can collect huge amounts of sensitive data, and controlling that data and keeping it secure presents significant risks. However, the Australian community is not well informed about these issues and some IoT providers are slow to explain appropriate and safe use of IoT devices and services.

These issues make it difficult for consumers to tell good practice from bad, and do not inspire trust in IoT. Lack of consistent international IoT standards can also make it difficult for different devices to work together, and creates a risk that users will be “locked in” to products from a single supplier.

In IoT systems it can also be very complex to determine who is responsible for any particular fault or issue, because of the many possible combinations of product, hardware, software and services. There will also be many contracts and user agreements, creating contractual complexity that adds to already difficult legal questions.




Read more:
Are your devices spying on you? Australia’s very small step to make the Internet of Things safer


The increased surveillance made possible by the IoT can lead to breaches of human rights. Partially or fully automated decision-making can also lead to discrimination and other socially unacceptable outcomes.

And while the IoT can assist environmental sustainability, it can also increase environmental costs and impacts. The ACOLA report estimates that by 2050 the IoT could consume between 1% and 5% of the world’s electricity.

Other risks of harmful social consequences include an increased potential for domestic violence, the targeting of children by malicious actors and corporate interests, increased social withdrawal and the exacerbation of existing inequalities for vulnerable populations. The recent death of a woman in rural New South Wales being treated via telehealth provides just one example of these risks.

Maximising the benefits of the IoT

The ACOLA report makes several recommendations for Australia to take advantage of the IoT while minimising its downsides.

ACOLA advocates a national approach, focusing on areas of strength. It recommends continuing investment in smart cities and regions, and more collaboration between industry, government and education.

ACOLA also recommends increased community engagement, better ethical and regulatory frameworks for data and baseline security standards.

The ACOLA report is only a beginning. More specific work needs to be done to make the IoT work for Australia and its citizens.

The report does outline key areas for future research. These include the actual experiences of people in smart cities and homes, the value of data, environmental impacts and the use of connected and autonomous vehicles.

The Conversation

Kayleen Manwaring, Senior Lecturer, School of Taxation & Business Law, UNSW and Peter Leonard, Professor of Practice (IT Systems and Management and Business and Taxation Law), UNSW Business School, Sydney, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

3.2 billion images and 720,000 hours of video are shared online daily. Can you sort real from fake?



Twitter screenshots/Unsplash, Author provided

T.J. Thomson, Queensland University of Technology; Daniel Angus, Queensland University of Technology, and Paula Dootson, Queensland University of Technology

Twitter over the weekend “tagged” as manipulated a video showing US Democratic presidential candidate Joe Biden supposedly forgetting which state he’s in while addressing a crowd.

Biden’s “hello Minnesota” greeting contrasted with prominent signage reading “Tampa, Florida” and “Text FL to 30330”.

The Associated Press’s fact check confirmed the signs were added digitally and the original footage was indeed from a Minnesota rally. But by the time the misleading video was removed it already had more than one million views, The Guardian reports.

If you use social media, the chances are you see (and forward) some of the more than 3.2 billion images and 720,000 hours of video shared daily. When faced with such a glut of content, how can we know what’s real and what’s not?

While one part of the solution is an increased use of content verification tools, it’s equally important we all boost our digital media literacy. Ultimately, one of the best lines of defence — and the only one you can control — is you.

Seeing shouldn’t always be believing

Misinformation (when you accidentally share false content) and disinformation (when you intentionally share it) in any medium can erode trust in civil institutions such as news organisations, coalitions and social movements. However, fake photos and videos are often the most potent.

For those with a vested political interest, creating, sharing and/or editing false images can distract, confuse and manipulate viewers to sow discord and uncertainty (especially in already polarised environments). Posters and platforms can also make money from the sharing of fake, sensationalist content.

Only 11-25% of journalists globally use social media content verification tools, according to the International Centre for Journalists.




Read more:
Facebook is tilting the political playing field more than ever, and it’s no accident


Could you spot a doctored image?

Consider this photo of Martin Luther King Jr.

This altered image clones part of the background over King Jr’s finger, so it looks like he’s flipping off the camera. It has been shared as genuine on Twitter, Reddit and white supremacist websites.

In the original 1964 photo, King flashed the “V for victory” sign after learning the US Senate had passed the civil rights bill.

Beyond adding or removing elements, there’s a whole category of photo manipulation in which images are fused together.

Earlier this year, a photo of an armed man was photoshopped by Fox News, which overlaid the man onto other scenes without disclosing the edits, the Seattle Times reported.

Similarly, the image below was shared thousands of times on social media in January, during Australia’s Black Summer bushfires. The AFP’s fact check confirmed it is not authentic and is actually a combination of several separate photos.

Fully and partially synthetic content

Online, you’ll also find sophisticated “deepfake” videos showing (usually famous) people saying or doing things they never did. Less advanced versions can be created using apps such as Zao and Reface.

A team from the Massachusetts Institute of Technology created this fake video showing US President Richard Nixon reading lines from a speech crafted in case the 1969 moon landing failed. (YouTube)

Or, if you don’t want to use your photo for a profile picture, you can default to one of several websites offering hundreds of thousands of AI-generated, photorealistic images of people.

AI-generated faces.
These people don’t exist, they’re just images generated by artificial intelligence.
Generated Photos, CC BY

Editing pixel values and the (not so) simple crop

Cropping can greatly alter the context of a photo, too.

We saw this in 2017, when a US government employee edited official pictures of Donald Trump’s inauguration to make the crowd appear bigger, according to The Guardian. The staffer cropped out the empty space “where the crowd ended” for a set of pictures for Trump.

Views of the crowds at the inaugurations of former US President Barack Obama in 2009 (left) and President Donald Trump in 2017 (right).
AP

But what about edits that only alter pixel values such as colour, saturation or contrast?

One historical example illustrates the consequences of this. In 1994, Time magazine’s cover of OJ Simpson considerably “darkened” Simpson in his police mugshot. This added fuel to a case already plagued by racial tension, to which the magazine responded:

No racial implication was intended, by Time or by the artist.

Tools for debunking digital fakery

For those of us who don’t want to be duped by visual mis/disinformation, there are tools available — although each comes with its own limitations (something we discuss in our recent paper).

Invisible digital watermarking has been proposed as a solution. However, it isn’t widespread and requires buy-in from both content publishers and distributors.

Reverse image search (such as Google’s) is often free and can be helpful for identifying earlier, potentially more authentic copies of images online. That said, it’s not foolproof because it:

  • relies on unedited copies of the media already being online
  • doesn’t search the entire web
  • doesn’t always allow filtering by publication time. Some reverse image search services such as TinEye support this function, but Google’s doesn’t.
  • returns only exact matches or near-matches, so it’s not thorough. For instance, editing an image and then flipping its orientation can fool Google into thinking it’s an entirely different one.



Read more:
Instead of showing leadership, Twitter pays lip service to the dangers of deep fakes


Most reliable tools are sophisticated

Meanwhile, manual forensic detection methods for visual mis/disinformation focus mostly on edits visible to the naked eye, or rely on examining features that aren’t included in every image (such as shadows). They’re also time-consuming, expensive and need specialised expertise.

Still, you can access work in this field by visiting sites such as Snopes.com — which has a growing repository of “fauxtography”.

Computer vision and machine learning also offer relatively advanced detection capabilities for images and videos. But they too require technical expertise to operate and understand.

Moreover, improving them involves using large volumes of “training data”, but the image repositories used for this usually don’t contain the real-world images seen in the news.

If you use an image verification tool such as the REVEAL project’s image verification assistant, you might need an expert to help interpret the results.

The good news, however, is that before turning to any of the above tools, there are some simple questions you can ask yourself to potentially figure out whether a photo or video on social media is fake. Think:

  • was it originally made for social media?
  • how widely and for how long was it circulated?
  • what responses did it receive?
  • who were the intended audiences?

Quite often, the logical conclusions drawn from the answers will be enough to weed out inauthentic visuals. You can access the full list of questions, put together by Manchester Metropolitan University experts, here.

The Conversation

T.J. Thomson, Senior Lecturer in Visual Communication & Media, Queensland University of Technology; Daniel Angus, Associate Professor in Digital Communication, Queensland University of Technology, and Paula Dootson, Senior Lecturer, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.

NBN upgrades explained: how will they make internet speeds faster? And will the regions miss out?



Shutterstock

Thas Ampalavanapillai Nirmalathas, University of Melbourne

The federal government has announced a A$3.5 billion upgrade to the National Broadband Network (NBN) that will grant two million households on-demand access to faster fibre-to-the-node (FTTN) internet by 2023.

Reports from the ABC suggest the plan would go as far as to upgrade the FTTN services to fibre-to-the-premises (FTTP) – although this wasn’t explicitly said in Minister for Communications Paul Fletcher’s announcement.

The minister said the upgrade would involve expanding current FTTN connections to run along more streets across the country, giving people the option to connect to broadband speeds of up to one gigabit per second. Improvements have also been promised for the hybrid fibre coaxial (HFC) and fibre-to-the-curb (FTTC) systems.

Altogether the upgrade is expected to give about six million households access to internet speeds of up to one gigabit per second. But how will the existing infrastructure be boosted? And who will miss out?

Getting ahead of the terminology

Let’s first understand the various terms used to describe aspects of the NBN network.

Fibre to the Premises (FTTP)

FTTP refers to households with an optical fibre connection running from a device on a wall of the house directly to the network. This provides reliable high-speed internet.

The “network” simply refers to the exchange point from which households’ broadband connections are passed to service providers, such as Telstra, who help them get connected.

In an FTTP network, fibre optic connectors in the back of distribution hub panels connect homes to broadband services.
Shutterstock

Fibre to the Node (FTTN)

The FTTN system serves about 4.7 million premises in Australia, out of a total 11.5 million covered under the NBN.

With FTTN, households are connected via a copper line to a “node” in their neighbourhood. This node is further connected to the network with fibre optic cables that transfer data much faster than copper cables can.

With FTTN systems, the quality of the broadband service depends on the length of the copper cable and the choice of technology used to support data transmission via this cable.

It’s technically possible to offer high internet speeds when copper cables are very short and the latest data transmission technologies are used.

In reality, however, Australia’s FTTN speeds using a fibre/copper mix have been slow. An FTTN connection’s reliability also depends on network conditions, such as the age of the copper cabling and whether any of the signal is leaking due to degradation.

Illustration of fibre optic cables.
Fibre optic cables use pulses of light for high-speed data transmission across long distances.
Shutterstock

Fibre to the Curb (FTTC)

The limitations of FTTN mentioned above can be sidestepped by extending fibre cables from the network right up to a curbside “distribution point unit” nearer to households. This unit then becomes the “node” of the network.

FTTC allows significantly faster data transmission. This is because it services relatively fewer households (allowing better signal transmission to each one) and reduces the length of copper cable relied upon.

Hybrid Fibre Coaxial (HFC)

In many areas, the NBN uses coaxial cables instead of copper cables. These were first installed by Optus and Telstra in the 1990s to deliver cable broadband and television. They’ve since been modernised for use in the NBN’s fibre network.

In theory, HFC systems should be able to offer internet speeds of more than 100 megabits per second. But many households have been unable to achieve this due to the poor condition of cabling infrastructure in some parts, as well as large numbers of households sharing a single coaxial cable.

Coaxial cables are the most limiting part of the HFC system. So expanding the length of fibre cabling (and shortening the coaxial cables being used) would allow faster internet speeds. The NBN’s 2020 corporate plan identifies doing this as a priority.

Minister Fletcher today said the planned upgrades would ensure all customers serviced by HFC would have access to speeds of up to one gigabit per second. Currently, only 7% of HFC customers do.

Mixing things up isn’t always a good idea

Under the original NBN plan, the Labor government in 2009 promised optical fibre connections for 93% of all Australian households.

Successive reviews led to the use of multiple technologies in the network, rather than the full-fibre network Labor envisioned. Many households are not able to upgrade their connection because of limitations to the technology available in their neighbourhood.




Read more:
The NBN: how a national infrastructure dream fell short


Also, many businesses currently served by FTTN can’t access internet speeds that meet their needs. To avoid internet speeds hindering their work, many businesses need a minimum speed between 100 megabits and 1 gigabit per second, depending on their scale.

Currently, no FTTN services and few HFC services can support such speeds.

Moreover, the Australian Competition and Consumer Commission’s NBN monitoring report published in May (during the pandemic) found that, in about 95% of cases, NBN plans delivered only 83-91% of the maximum advertised speed.

The report also showed 10% of the monitored services were underperforming – and 95% of these were FTTN services. This makes a strong case for the need to upgrade FTTN.

Who will benefit?

While the NBN’s most recent corporate plan identifies work to be done across its various offerings (FTTN, FTTC, HFC, fixed wireless), it’s unclear exactly how much each system stands to gain from today’s announcements.

Ideally, urban and regional households that can’t access 100 megabits per second speeds would be prioritised for fibre expansion. The expanded FTTN network should also cover those struggling to access reliable broadband in regional Australia.

Bringing fibre cabling to households in remote areas would be difficult. One option, however, could be to extend fibre connections to an expanded network of base stations in regional Australia, thereby improving the NBN’s fixed wireless connectivity capacity.

These base stations “beam” signals to nearby premises. Installing more stations would mean fewer premises covered by each (and therefore better connectivity for each).

Regardless, it’s important the upgrades happen quickly. Many NBN customers now working and studying from home will be waiting eagerly for a much-needed boost to their internet speed.




Read more:
How to boost your internet speed when everyone is working from home


The Conversation


Thas Ampalavanapillai Nirmalathas, Group Head – Electronic and Photonic Systems Group and Professor of Electrical and Electronic Engineering, University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Towards a post-privacy world: proposed bill would encourage agencies to widely share your data


Bruce Baer Arnold, University of Canberra

The federal government has announced a plan to increase the sharing of citizen data across the public sector.

This would include data sitting with agencies such as Centrelink, the Australian Tax Office, the Department of Home Affairs, the Bureau of Statistics and potentially other external “accredited” parties such as universities and businesses.

The draft Data Availability and Transparency Bill released today will not fix ongoing problems in public administration. It won’t solve many problems in public health. It is a worrying shift to a post-privacy society.

It’s a matter of arrogance, rather than effectiveness. It highlights deficiencies in Australian law that need fixing.




Read more:
Australians accept government surveillance, for now


Making sense of the plan

Australian governments on all levels have built huge silos of information about us all. We supply the data for these silos each time we deal with government.

It’s difficult to exercise your rights and responsibilities without providing data. If you’re a voter, a director, a doctor, a gun owner, on welfare, pay tax, have a driver’s licence or Medicare card – our governments have data about you.

Much of this is supplied on a legally mandatory basis. It allows the federal, state, territory and local governments to provide pensions, elections, parks, courts and hospitals, and to collect rates, fees and taxes.

The proposed Data Availability and Transparency Bill will authorise large-scale sharing of data about citizens and non-citizens across the public sector, between both public and private bodies. Previously called the “Data Sharing and Release” legislation, the word “transparency” has now replaced “release” to allay public fears.

The legislation would allow sharing between Commonwealth government agencies that are currently constrained by a range of acts overseen (weakly) by the under-resourced Australian Information Commissioner (OAIC).

The acts often only apply to specific agencies or data. Overall we have a threadbare patchwork of law that is supposed to respect our privacy but often isn’t effective. It hasn’t kept pace with law in Europe and elsewhere in the world.

The plan also envisages sharing data with trusted third parties. They might be universities or other research institutions. In future, the sharing could extend to include state or territory agencies and the private sector, too.

Any public or private bodies that receive data can then share it forward. Irrespective of whether one has anything to hide, this plan is worrying.

Why will there be sharing?

Sharing isn’t necessarily a bad thing. But it should be done accountably and appropriately.

Consultations over the past two years have highlighted the value of inter-agency sharing for law enforcement and for research into health and welfare. Universities have identified a range of uses regarding urban planning, environment protection, crime, education, employment, investment, disease control and medical treatment.

Many researchers will be delighted by the prospect of accessing data more cheaply than doing onerous small-scale surveys. IT people have also been enthusiastic about money that could be made helping the databases of different agencies talk to each other.

However, the reality is more complicated, as researchers and civil society advocates have pointed out.

Person hitting a 'share' button on a keyboard.
In a July speech to the Australian Society for Computers and Law, former High Court Justice Michael Kirby highlighted a growing need to fight for privacy, rather than let it slip away.
Shutterstock

Why should you be worried?

The plan for comprehensive data sharing is founded on the premise of accreditation of data recipients (entities deemed trustworthy) and oversight by the Office of the National Data Commissioner, under the proposed act.

The draft bill announced today is open for a short period of public comment before it goes to parliament. It features a consultation paper alongside a disquieting consultants’ report about the bill. In this report, the consultants refer to concerns and “high inherent risk”, but unsurprisingly appear to assume things will work out.

Federal Minister for Government Services Stuart Roberts, who presided over the tragedy known as the RoboDebt scheme, is optimistic about the bill. He dismissed critics’ concerns by stating consent is implied when someone uses a government service. This seems disingenuous, given people typically don’t have a choice.

However, the bill does exclude some data sharing. If you’re a criminologist researching law enforcement, for example, you won’t have an open sesame. Experience with the national Privacy Act and other Commonwealth and state legislation tells us such exclusions weaken over time.

Outside the narrow exclusions centred on law enforcement and national security, the bill’s default position is to share widely and often. That’s because the accreditation requirements for agencies aren’t onerous and the bases for sharing are very broad.

This proposal exacerbates ongoing questions about day-to-day privacy protection. Who’s responsible, with what framework and what resources?

Responsibility is crucial, as national and state agencies recurrently experience data breaches. Yet, as RoboDebt revealed, they often stick to denial. Universities are also often wide open to data breaches.

Proponents of the plan argue privacy can be protected through robust de-identification, in other words removing the ability to identify specific individuals. However, research has recurrently shown “de-identification” is no silver bullet.

Most bodies don’t recognise the scope for re-identification of de-identified personal information and lots of sharing will emphasise data matching.

Be careful what you ask for

Sharing may result in social goods such as better cities, smarter government and healthier people by providing access to data (rather than just money) for service providers and researchers.

That said, our history of aspirational statements about privacy protection without meaningful enforcement by watchdogs should provoke some hard questions. It wasn’t long ago the government failed to prevent hackers from accessing sensitive data on more than 200,000 Australians.

It’s true this bill would ostensibly provide transparency, but it won’t provide genuine accountability. It shouldn’t be taken at face value.




Read more:
Seven ways the government can make Australians safer – without compromising online privacy


The Conversation


Bruce Baer Arnold, Assistant Professor, School of Law, University of Canberra

This article is republished from The Conversation under a Creative Commons license. Read the original article.

A computer can guess more than 100,000,000,000 passwords per second. Still think yours is secure?



Paul Haskell-Dowland, Author provided

Paul Haskell-Dowland, Edith Cowan University and Brianna O’Shea, Edith Cowan University

Passwords have been used for thousands of years as a means of identifying ourselves to others and, in more recent times, to computers. It’s a simple concept – a shared piece of information, kept secret between individuals and used to “prove” identity.

Passwords in an IT context emerged in the 1960s with mainframe computers – large centrally operated computers with remote “terminals” for user access. They’re now used for everything from the PIN we enter at an ATM, to logging in to our computers and various websites.

But why do we need to “prove” our identity to the systems we access? And why are passwords so hard to get right?




Read more:
The long history, and short future, of the password


What makes a good password?

Until relatively recently, a good password might have been a word or phrase of as little as six to eight characters. But we now have minimum length guidelines. This is because of “entropy”.

When talking about passwords, entropy is the measure of predictability. The maths behind this isn’t complex, but let’s examine it with an even simpler measure: the number of possible passwords, sometimes referred to as the “password space”.

If a one-character password only contains one lowercase letter, there are only 26 possible passwords (“a” to “z”). By including uppercase letters, we increase our password space to 52 potential passwords.

The password space continues to expand as the length is increased and other character types are added.

Making a password longer or more complex greatly increases the potential ‘password space’. More password space means a more secure password.

Looking at the above figures, it’s easy to understand why we’re encouraged to use long passwords with upper and lowercase letters, numbers and symbols. The more complex the password, the more attempts needed to guess it.
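The arithmetic behind the “password space” can be sketched in a few lines of Python. This is a minimal illustration using the usual character-set sizes (26 lowercase letters, 52 with uppercase added, roughly 95 printable ASCII characters):

```python
import math

def password_space(charset_size: int, length: int) -> int:
    """Number of possible passwords for a given character set and length."""
    return charset_size ** length

def entropy_bits(charset_size: int, length: int) -> float:
    """Entropy in bits: each extra bit doubles the number of guesses needed."""
    return length * math.log2(charset_size)

# One lowercase letter gives 26 possibilities; adding uppercase doubles it
print(password_space(26, 1))   # 26
print(password_space(52, 1))   # 52

# An 8-character password drawn from 95 printable ASCII characters
print(password_space(95, 8))            # 6634204312890625
print(f"{entropy_bits(95, 8):.1f}")     # 52.6 bits
```

Extending the length is more powerful than widening the character set: every extra character multiplies the space by the full charset size.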

However, the problem with depending on password complexity is that computers are highly efficient at repeating tasks – including guessing passwords.

Last year, a record was set for a computer trying to generate every conceivable password. It achieved a rate faster than 100,000,000,000 guesses per second.

By leveraging this computing power, cyber criminals can hack into systems by bombarding them with as many password combinations as possible, in a process called brute force attacks.

And with cloud-based technology, guessing an eight-character password can be achieved in as little as 12 minutes and cost as little as US$25.

Also, because passwords almost always give access to sensitive data or important systems, cyber criminals are motivated to actively seek them out. This also drives a lucrative online market selling passwords, some of which come with email addresses and/or usernames.

You can purchase almost 600 million passwords online for just AU$14!

How are passwords stored on websites?

Website passwords are usually stored in a protected manner using a mathematical algorithm called hashing. A hashed password is unrecognisable and can’t be turned back into the password (an irreversible process).

When you try to log in, the password you enter is hashed using the same process and compared with the version stored on the site. This is repeated each time you log in.

For example, the password “Pa$$w0rd” is given the value “02726d40f378e716981c4321d60ba3a325ed6a4c” when calculated using the SHA1 hashing algorithm. Try it yourself.

When faced with a file full of hashed passwords, a brute force attack can be used, trying every combination of characters for a range of password lengths. This has become such common practice that there are websites that list common passwords alongside their (calculated) hashed value. You can simply search for the hash to reveal the corresponding password.

This screenshot of a Google search result for the SHA hashed password value ‘02726d40f378e716981c4321d60ba3a325ed6a4c’ reveals the original password: ‘Pa$$w0rd’.
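Both steps – hashing a password for storage, then brute-forcing a stolen hash from a word list – can be sketched in Python’s standard library. This is a deliberately simplified illustration: real sites should also add a per-user salt and use a slow hash such as bcrypt, not plain SHA-1.

```python
import hashlib

# How a naive website might store a password: keep only the SHA-1 hash
stored_hash = hashlib.sha1(b"Pa$$w0rd").hexdigest()

def dictionary_attack(target_hash, candidates):
    """Hash each candidate password and compare it with the stolen hash."""
    for password in candidates:
        if hashlib.sha1(password.encode()).hexdigest() == target_hash:
            return password
    return None  # no candidate matched

# A tiny, hypothetical word list; real attacks use billions of candidates
wordlist = ["123456", "qwerty", "password", "Pa$$w0rd"]
print(dictionary_attack(stored_hash, wordlist))  # Pa$$w0rd
```

Because hashing is fast by design, an attacker can test enormous word lists against every hash in a stolen file – which is exactly why long, uncommon passwords matter.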

The theft and selling of passwords lists is now so common, a dedicated website — haveibeenpwned.com — is available to help users check if their accounts are “in the wild”. This has grown to include more than 10 billion account details.

If your email address is listed on this site you should definitely change the detected password, as well as on any other sites for which you use the same credentials.




Read more:
Will the hack of 500 million Yahoo accounts get everyone to protect their passwords?


Is more complexity the solution?

You would think with so many password breaches occurring daily, we would have improved our password selection practices. Unfortunately, last year’s annual SplashData password survey has shown little change over five years.

The 2019 annual SplashData password survey revealed the most common passwords from 2015 to 2019.

As computing capabilities increase, the solution would appear to be increased complexity. But as humans, we are not skilled at (nor motivated to) remember highly complex passwords.

We’ve also passed the point where we use only two or three systems needing a password. It’s now common to access numerous sites, with each requiring a password (often of varying length and complexity). A recent survey suggests there are, on average, 70-80 passwords per person.

The good news is there are tools to address these issues. Most computers now support password storage in either the operating system or the web browser, usually with the option to share stored information across multiple devices.

Examples include Apple’s iCloud Keychain and the password-saving features built into Internet Explorer, Chrome and Firefox (although these are less reliable).

Password managers such as KeePassXC can help users generate long, complex passwords and store them in a secure location for when they’re needed.

While this location still needs to be protected (usually with a long “master password”), using a password manager lets you have a unique, complex password for every website you visit.
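The generation side of a password manager can be approximated with Python’s standard `secrets` module, which is designed for cryptographic randomness. This is only a sketch of the generator; real managers also handle encrypted storage and autofill.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a long random password from letters, digits and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# A fresh, unique password for each site you register with
print(generate_password())    # different every run, e.g. 'k;V9w}Qx...'
print(generate_password(32))  # longer is stronger
```

Note the use of `secrets.choice` rather than `random.choice`: the `random` module is predictable and unsuitable for passwords.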

This won’t prevent a password from being stolen from a vulnerable website. But if it is stolen, you won’t have to worry about changing the same password on all your other sites.

There are of course vulnerabilities in these solutions too, but perhaps that’s a story for another day.




Read more:
Facebook hack reveals the perils of using a single account to log in to other services


The Conversation


Paul Haskell-Dowland, Associate Dean (Computing and Security), Edith Cowan University and Brianna O’Shea, Lecturer, Ethical Hacking and Defense, Edith Cowan University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Can I still be hacked with 2FA enabled?



Shutterstock

David Tuffley, Griffith University

Cybersecurity is like a game of whack-a-mole. As soon as the good guys put a stop to one type of attack, another pops up.

Usernames and passwords were once good enough to keep an account secure. But before long, cybercriminals figured out how to get around this.

Often they’ll use “brute force attacks”, bombarding a user’s account with various password and login combinations in a bid to guess the correct one.

To deal with such attacks, a second layer of security was added in an approach known as two-factor authentication, or 2FA. It’s widespread now, but does 2FA also leave room for loopholes cybercriminals can exploit?

2FA via text message

There are various types of 2FA. The most common method is to be sent a single-use code as an SMS message to your phone, which you then enter following a prompt from the website or service you’re trying to access.

Most of us are familiar with this method as it’s favoured by major social media platforms. However, while it may seem safe enough, it isn’t necessarily.

Hackers have been known to trick mobile phone carriers (such as Telstra or Optus) into transferring a victim’s phone number to their own phone.




Read more:
$2.5 billion lost over a decade: ‘Nigerian princes’ lose their sheen, but scams are on the rise


Pretending to be the intended victim, the hacker contacts the carrier with a story about losing their phone, requesting a new SIM with the victim’s number to be sent to them. Any authentication code sent to that number then goes directly to the hacker, granting them access to the victim’s accounts.

This method is called SIM swapping. It’s probably the easiest of several types of scams that can circumvent 2FA.

And while carriers’ verification processes for SIM requests are improving, a competent trickster can talk their way around them.

Authenticator apps

The authenticator method is more secure than 2FA via text message. It works on a principle known as TOTP, or “time-based one-time password”.

TOTP is more secure than SMS because a code is generated on your device rather than being sent across the network, where it might be intercepted.
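The TOTP principle can be sketched in standard-library Python, following the RFC 6238 algorithm: the device and the server share a secret, and both derive the same short-lived code from it and the current time. This is a simplified illustration, not a replacement for a real authenticator app.

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, at=None, digits: int = 6, step: int = 30) -> str:
    """Derive a one-time code from a shared secret and the current time."""
    counter = int((time.time() if at is None else at) // step)  # 30-second window
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at Unix time 59
print(totp(b"12345678901234567890", at=59, digits=8))  # 94287082
```

Because the code depends only on the shared secret and the clock, nothing secret crosses the network at login time – which is exactly the advantage over SMS codes.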

The authenticator method uses apps such as Google Authenticator, LastPass, 1Password, Microsoft Authenticator, Authy and Yubico.

However, while it’s safer than 2FA via SMS, there have been reports of hackers stealing authentication codes from Android smartphones. They do this by tricking the user into installing malware (software designed to cause harm) that copies and sends the codes to the hacker.

Malware is generally easier to get onto an Android phone than an iPhone. Apple’s iOS is a closed system that only allows apps from its curated App Store, while Android lets users install apps from outside the official store, giving malicious software an easier path onto the device.

2FA using details unique to you

Biometric methods are another form of 2FA. These include fingerprint login, face recognition, retinal or iris scans, and voice recognition. Biometric identification is becoming popular for its ease of use.

Most smartphones today can be unlocked by placing a finger on the scanner or letting the camera scan your face – much quicker than entering a password or passcode.

However, biometric data can be hacked, too, either from the servers where it is stored or from the software that processes it.

One case in point is last year’s BioStar 2 data breach, in which nearly 28 million biometric records were exposed. BioStar 2 is a security system that uses facial recognition and fingerprinting technology to help organisations secure access to buildings.

There can also be false negatives and false positives in biometric recognition. Dirt on the fingerprint reader or on the person’s finger can lead to false negatives. Also, faces can sometimes be similar enough to fool facial recognition systems.

Another type of 2FA comes in the form of personal security questions such as “what city did your parents meet in?” or “what was your first pet’s name?”

Only the most determined and resourceful hacker will be able to find answers to these questions. It’s unlikely, but still possible, especially as more of us adopt public online profiles.

Often when we share our lives on the internet, we fail to consider what kinds of people may be watching. Shutterstock

2FA remains best practice

Despite all of the above, the biggest vulnerability to being hacked is still the human factor. Successful hackers have a bewildering array of psychological tricks in their arsenal.

A cyber attack could come as a polite request, a scary warning, a message ostensibly from a friend or colleague, or an intriguing “clickbait” link in an email.

The best way to protect yourself from hackers is to develop a healthy amount of scepticism. If you carefully check websites and links before clicking through and also use 2FA, the chances of being hacked become vanishingly small.

The bottom line is that 2FA is effective at keeping your accounts safe. However, try to avoid the less secure SMS method when given the option.

Just as burglars in the real world focus on houses with poor security, hackers on the internet look for weaknesses.

And while any security measure can be overcome with enough effort, a hacker won’t make that investment unless they stand to gain something of greater value.

David Tuffley, Senior Lecturer in Applied Ethics & CyberSecurity, Griffith University

This article is republished from The Conversation under a Creative Commons license. Read the original article.