Control, cost and convenience determine how Australians use the technology in their homes



Who’s the boss in a smart home?
Shutterstock/Tracy ben

Kate Letheren, Queensland University of Technology; Rebekah Russell-Bennett, Queensland University of Technology; Rory Mulcahy, University of the Sunshine Coast, and Ryan McAndrew, Queensland University of Technology

We have access to plenty of technology that can serve us by automating more of our daily lives, doing everything from adjusting the temperature of our homes to (eventually) putting groceries in our fridges.

But do we want these advancements? And – importantly – do we trust them?

Our research, published earlier this year in the European Journal of Marketing, looked at the roles technology plays in Australian homes. We found three main ways people assign control to, and trust in, their technology.




Read more:
One reason people install smart home tech is to show off to their friends


Most people still want some level of control. That’s an important message for developers if they want to keep increasing the uptake of smart home technology, which is yet to reach 25% penetration in Australia.

How smart do we want a home?

Smart homes are modern homes that have appliances or electronic devices that can be controlled remotely by the owner. Examples include lights controlled via apps, smart locks and security systems, and even smart coffee machines that remember your brew of choice and waking time.

But we still don’t understand consumer interactions with these technologies, and to speed up their adoption we need to know the type of value they can offer.

We conducted a set of studies in conjunction with CitySmart and a group of distributors, and we asked people about their smart technology preferences in the context of electricity management (managing appliances and utility plans).

We conducted 45 household interviews involving 116 people across Queensland, New South Wales, Western Australia, and Tasmania. Then we surveyed 1,345 Australian households. The interviews uncovered and explored the social roles assigned to technologies, while the survey allowed us to collect additional information and find out how the broader Australian population felt about these technology types.

We found households attribute social roles and rules to smart home technologies. This makes sense: the study of anthropomorphism tells us we tend to humanise things we want to understand. We humanise in order to trust (remember Clippy, the Microsoft paperclip with whom we all had a love-hate relationship?).

These social roles and rules determine whether (or how) households will adopt the technologies.

Tech plays three roles

Most people want technology to serve them (95.6% of interviewees, about 19 out of 20). Those who didn’t want any technology were classified as “resisters” and made up less than 5% of the respondents.

We found the role that technology can play in households tended to fall into one of three categories – the intern, the assistant and the manager:

  • the intern (passive technology)
    Technology exists to bring me information, but shouldn’t be making any decisions on its own. Real-life example: Switch your Thinking provides an SMS-based tip service. This mode of use was preferred in 22-35% of households.

  • the assistant (interactive technology)
    Technology should not only bring me information, but add value by helping me make decisions or interact. Real-life example: Homesmart from Ergon provides useful data to support consumers in their decisions, including remotely controlling appliances or monitoring an electricity budget. This mode of use was preferred in 41-51% of households.

  • the manager (proactive technology)
    Technology should analyse information and make decisions itself, in order to make my life more efficient. Real-life example: Tibber, which learns your home’s electricity-usage pattern and helps you make adjustments. This mode of use was preferred in 22-24% of households.

Who’s the boss?

According to our study, while smart technology roles can change, the customer always remains the CEO. As CEO, they determine whether full control is retained or delegated to the technology.

For example, while two consumers might install a set of smart lights, one may engage by directly controlling lights via the app, while the other delegates this to the app – allowing it to choose based on sunset times when lights should be on.
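That split between engaging and delegating maps directly onto the three roles described above. A minimal sketch of the idea – the mode names come from the study, but the sunset rule and all the logic here are illustrative assumptions, not any real product:

```python
from datetime import time

SUNSET = time(18, 30)  # hypothetical local sunset time

def smart_light(mode, now, user_switch=None):
    """Toy model of the study's three roles. Returns (lights_on, message).
    'intern' only reports, 'assistant' suggests but defers to the user,
    'manager' decides on its own. Purely illustrative logic."""
    after_sunset = now >= SUNSET
    if mode == "intern":       # passive: inform only; the user flips the switch
        return bool(user_switch), "FYI: sunset has passed" if after_sunset else "FYI: sunset not yet"
    if mode == "assistant":    # interactive: suggest, but the user has the final say
        decision = user_switch if user_switch is not None else after_sunset
        return decision, "Suggestion: lights on" if after_sunset else "Suggestion: lights off"
    if mode == "manager":      # proactive: decide autonomously
        return after_sunset, "Handled automatically"
    raise ValueError(f"unknown mode: {mode}")
```

In each mode the person stays the CEO: they choose the mode itself, and can override or revoke the delegation at any time.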

Roles for consumer and smart technology – the consumer always remains the CEO, but technology can be viewed as an intern, assistant, or manager.
Natalie Sketcher, Visual Designer

Notably, time pressure was evident as justification for each of the three options. Passive technology saved time by not wasting it on fiddling with smart tech. Interactive technology gave information and controlled interactions for busy families. Proactive technology relieved overwhelmed households from managing their own electricity.

All households had clear motivation for their choices.

Households that chose passive technology were motivated by simplicity, cost-effectiveness and privacy concerns. One study participant in this group said:

Less hassle. Don’t like tech controlling my life.

Households prioritising interactive technology were looking for a balance of convenience and control, technology that provides:

A good support but allows me to maintain overall control and decision-making.

Households keen on proactive technology wanted set-and-forget abilities to allow the household to focus on the more important things in life. They sought:

Having the process looked after and managed for us as we don’t have the time to do it ourselves.

This raises the question: why did we see such differences in household preference?

Trust in tech

According to our research, this comes down to the relationship between trust, risk, and the need for control. These motivations are simply expressed differently in different households.

While one household sees delegating their choices as a safe bet (that is, trusting the technology to save them from the risk of electricity over-spend), another would see retaining all choices as the true expression of being in control (that is, believing humans should be trusted with decisions, with technology providing input only when asked).




Read more:
Smart speakers are everywhere — and they’re listening to more than you think


This is not unusual, nor is this the first study to find the importance of our sense of trust and risk in making technology decisions.

It’s not that consumers don’t want advancements to serve them – they do – but this working relationship requires clear roles and ground rules. Only then can there be trust.

For smart home technology developers, the message is clear: households will continue to expect control and customisation features so that the technology serves them – either as an intern, an assistant, or a manager – while they remain the CEO.

If you’re interested to discover your working relationship with technology, complete this three-question online quiz.

Kate Letheren, Postdoctoral Research Fellow, Queensland University of Technology; Rebekah Russell-Bennett, Social Marketing Professor, School of Advertising, Marketing and Public Relations, Queensland University of Technology; Rory Mulcahy, Lecturer of Marketing, University of the Sunshine Coast, and Ryan McAndrew, Social Marketer & Market Researcher, Queensland University of Technology

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Fingerprint and face scanners aren’t as secure as we think they are



Biometric systems are increasingly used in our civil, commercial and national defence applications.
Shutterstock

Wencheng Yang, Edith Cowan University and Song Wang, La Trobe University

Despite what every spy movie in the past 30 years would have you think, fingerprint and face scanners used to unlock your smartphone or other devices aren’t nearly as secure as they’re made out to be.

While it’s not great if your password is made public in a data breach, at least you can easily change it. If the scan of your fingerprint or face – known as “biometric template data” – is revealed in the same way, you could be in real trouble. After all, you can’t get a new fingerprint or face.

Your biometric template data are permanently and uniquely linked to you. The exposure of that data to hackers could seriously compromise user privacy and the security of a biometric system.

Current techniques provide effective security against breaches, but advances in artificial intelligence (AI) are rendering these protections obsolete.




Read more:
Receiving a login code via SMS and email isn’t secure. Here’s what to use instead


How biometric data could be breached

If a hacker wanted to access a system that was protected by a fingerprint or face scanner, there are a number of ways they could do it:

  1. your fingerprint or face scan (template data) stored in the database could be replaced by a hacker to gain unauthorised access to a system

  2. a physical copy or spoof of your fingerprint or face could be created from the stored template data (with Play-Doh, for example) to gain unauthorised access to a system

  3. stolen template data could be reused to gain unauthorised access to a system

  4. stolen template data could be used by a hacker to unlawfully track an individual from one system to another.

Biometric data need urgent protection

Nowadays, biometric systems are increasingly used in our civil, commercial and national defence applications.

Consumer devices equipped with biometric systems are found in everyday electronic devices like smartphones. MasterCard and Visa both offer credit cards with embedded fingerprint scanners. And wearable fitness devices are increasingly using biometrics to unlock smart cars and smart homes.

So how can we protect raw template data? A range of encryption techniques have been proposed. These fall into two categories: cancellable biometrics and biometric cryptosystems.




Read more:
When your body becomes your password, the end of the login is nigh


In cancellable biometrics, complex mathematical functions are used to transform the original template data when your fingerprint or face is being scanned. This transformation is non-reversible, meaning there’s no risk of the transformed template data being turned back into your original fingerprint or face scan.

If the database holding the transformed template data is breached, the stored records can simply be deleted and the transformation revoked. Re-enrolling with a new transformation then produces a completely different template, even though you scan the same finger or face.

In biometric cryptosystems, the original template data are combined with a cryptographic key to generate a “black box”. The cryptographic key is the “secret”, and a fresh biometric scan (the query) acts as the “key” that unlocks the black box so the secret can be retrieved. The cryptographic key is released only upon successful authentication.
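The cancellable-biometrics idea can be sketched in a few lines. This is a toy random-projection transform, not any production scheme – the feature vector, key names and matrix sizes are all illustrative assumptions. The projection matrix is derived from a revocable user key, and projecting down to fewer dimensions makes the transform non-invertible:

```python
import hashlib
import random

def cancellable_transform(features, user_key):
    """Toy cancellable-biometrics transform: project the raw feature
    vector through a random matrix derived from a revocable key.
    Fewer output dimensions than inputs means the projection cannot
    be inverted back to the original features."""
    # Seed a PRNG from the revocable key so the same key always
    # reproduces the same projection matrix.
    seed = int.from_bytes(hashlib.sha256(user_key.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    n = len(features)
    matrix = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(n // 2)]
    return [sum(m * f for m, f in zip(row, features)) for row in matrix]

raw = [0.2, 0.9, 0.4, 0.7]  # stand-in for extracted fingerprint features
template_v1 = cancellable_transform(raw, "key-v1")
# After a breach, revoke the key: the same finger yields a new template.
template_v2 = cancellable_transform(raw, "key-v2")
```

If the stored template leaks, the provider deletes it and issues a new key; the attacker’s copy no longer matches anything, while the user keeps the same finger or face.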

AI is making security harder

In recent years, new biometric systems that incorporate AI have really come to the forefront of consumer electronics. Think: smart cameras with built-in AI capability to recognise and track specific faces.

But AI is a double-edged sword. While new developments, such as deep artificial neural networks, have enhanced the performance of biometric systems, potential threats could arise from the integration of AI.

For example, researchers at New York University created a tool called DeepMasterPrints. It uses deep learning techniques to generate fake fingerprints that can unlock a large number of mobile devices. It’s similar to the way that a master key can unlock every door.

Researchers have also demonstrated how deep artificial neural networks can be trained so that the original biometric inputs (such as the image of a person’s face) can be obtained from the stored template data.




Read more:
Facial recognition is increasingly common, but how does it work?


New data protection techniques are needed

Thwarting these types of threats is one of the most pressing issues facing designers of secure AI-based biometric recognition systems.

Existing encryption techniques designed for non-AI-based biometric systems are incompatible with AI-based biometric systems. So new protection techniques are needed.

Academic researchers and biometric scanner manufacturers should work together to secure users’ sensitive biometric template data, thus minimising the risk to users’ privacy and identity.

In academic research, special focus should be put on the two most important aspects: recognition accuracy and security. As this research falls within Australia’s science and research priority of cybersecurity, both the government and private sectors should provide more resources to the development of this emerging technology.

Wencheng Yang, Post Doctoral Researcher, Security Research Institute, Edith Cowan University and Song Wang, Senior Lecturer, Engineering, La Trobe University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Huawei or the highway? The rising costs of New Zealand’s relationship with China



New Zealand’s Prime Minister Jacinda Ardern meeting with the Premier of the State Council of the People’s Republic of China Li Keqiang during last year’s ASEAN summit.
AAP Image/Mick Tsikas, CC BY-ND

David Belgrave, Massey University

Until recently, New Zealand’s relationship with China has been easy and at little cost to Wellington. But those days are probably over. New Zealand’s decision to block Huawei from its 5G cellular networks due to security concerns is the first in what could be many hard choices New Zealand will need to make that challenge Wellington’s relationship with Beijing.

For over a decade New Zealand has reaped the benefits of a free-trade agreement with China and seen a boom of Chinese tourists. China is New Zealand’s largest export destination and, apart from concerns about the influence of Chinese capital on the housing market, there have been few negatives for New Zealand.

Long-held fears that New Zealand would eventually have to “choose” between Chinese economic opportunities and American military security had not eventuated.




Read more:
New Zealand’s Pacific reset: strategic anxieties about rising China


But now New Zealand business people in China have warned of souring relations and the tourism industry is worried about a downturn due to backlash following the Huawei controversy.

China’s growing might

During Labour’s government under Helen Clark (1999-2008) and under the National government with John Key as prime minister (2008-2016), New Zealand could be all things to all people, building closer relationships with China while finally calming the last of the lingering American resentment over New Zealand’s anti-nuclear policies. But now, there are difficult decisions to be made.

As China becomes more assertive on the world stage, it is becoming increasingly difficult for New Zealand to keep up this balancing act. Two forces are pushing a more demanding line from Beijing. One is China’s move to assert more control over waters well off its coast.

For decades, Beijing was happy to let the US Navy maintain order over the Western Pacific to facilitate global trade with China. As China’s own economic and military abilities have grown, it has begun to show that it is willing to protect what it sees as its own patch. Its mammoth island building in the South China Sea is a testament to its new-found desire to push its territorial claims after decades of patience.




Read more:
Despite strong words, the US has few options left to reverse China’s gains in the South China Sea


China’s stronger foreign policy is testing what is known as the “rules-based order”, essentially a set of agreed rules that facilitate diplomacy and global trade, and resolve disputes between nations. This is very concerning for New Zealand, as it needs stable rules to allow it to trade with the world. New Zealand doesn’t have the size to bully other countries into giving it what it wants.

Trump-style posturing would get New Zealand nowhere. A more powerful China doesn’t need to threaten the rules-based system, but the transition could create uncertainty for business and higher risks of trade disruption. It is vital for New Zealand that an Asia-Pacific dominated by China is as orderly as one dominated by the US.

Tech made in China

The other force challenging the relationship is China’s emergence as a source of technology rather than simply a manufacturer of other countries’ goods. Many Chinese firms like Huawei are now direct competitors of Western tech companies. Huawei’s success makes it strategically important for Beijing and a point of pride for ordinary Chinese citizens.

Yet, unlike Western countries, China actively monitors its population through a wide variety of mass surveillance technology. Therefore, there is a trust problem when Chinese firms claim that their devices are secure from Beijing’s spies. New Zealand’s decision to effectively ban Huawei components from 5G cellular networks could be the first in many decisions needed to ensure national security.

Chinese-designed goods are becoming more common, and issues around privacy and national security will only grow as everyday household goods become connected to the internet. Restrictions on Chinese-made goods will further frustrate Beijing and invite greater retaliation against New Zealand exporters and tourism operators.

In more extreme cases, foreign nationals have been detained in China in response to overseas arrests of prominent Chinese individuals. As many as 13 Canadians were detained recently in China following the arrest of Huawei’s CFO Meng Wanzhou in Vancouver at the request of US prosecutors.




Read more:
Australian-Chinese author’s detention raises important questions about China’s motivations


Declaring the limits of the relationship

If New Zealand is to maintain a healthy relationship with China, it needs to be clear on what it is not willing to accept. It is easy to say individual privacy, national security and freedom of speech are vital interests of New Zealand, but Wellington needs to be clear to its citizens and to China what exactly those concepts mean in detail. All relationships require compromise, so Wellington needs to be direct about what it won’t compromise.

New Zealand spent decades during the Cold War debating how much public criticism of the US the government could allow itself before it risked its alliance with the Americans. New Zealanders wondered if they really had an independent foreign policy if they couldn’t stand up to their friends. Eventually nationalist sentiment spilled over in the form of the anti-nuclear policy.

New Zealand is now heading for the same debate as Kiwis worry about how much they can push back against Beijing’s interests before it starts to hurt the economy. Now that the relationship with China is beginning to have significant costs as well as benefits, it’s probably time New Zealanders figured out how much they are prepared to pay for an easy trading relationship with China.

David Belgrave, Lecturer in Politics and Citizenship, Massey University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Travelling overseas? What to do if a border agent demands access to your digital device



New laws enacted in New Zealand give customs agents the right to search your phone.
Shutterstock

Katina Michael, Arizona State University

New laws enacted in New Zealand this month give border agents the right to demand travellers entering the country hand over passwords for their digital devices. We outline what you should do if it happens to you, in the first part of a series exploring how technology is changing tourism.


Imagine returning home to Australia or New Zealand after a long-haul flight, exhausted and red-eyed. You’ve just reclaimed your baggage after getting through immigration when you’re stopped by a customs officer who demands you hand over your smartphone and the password. Do you know your rights?

Both Australian and New Zealand customs officers are legally allowed to search not only your personal baggage, but also the contents of your smartphone, tablet or laptop. It doesn’t matter whether you are a citizen or visitor, or whether you’re crossing a border by air, land or sea.




Read more:
How to protect your private data when you travel to the United States


New laws that came into effect in New Zealand on October 1 give border agents:

…the power to make a full search of a stored value instrument (including power to require a user of the instrument to provide access information and other information or assistance that is reasonable and necessary to allow a person to access the instrument).

Those who don’t comply could face prosecution and NZ$5,000 in fines. Border agents have similar powers in Australia and elsewhere. In Canada, for example, hindering or obstructing a border guard could cost you up to C$50,000 or five years in prison.

A growing trend

Australia and New Zealand don’t currently publish data on these kinds of searches, but there is a growing trend of device search and seizure at US borders. There was a more than fivefold increase in the number of electronic device inspections between 2015 and 2016 – bringing the total number to 23,000 per year. In the first six months of 2017, the number of searches was already almost 15,000.

In some of these instances, people have been threatened with arrest if they didn’t hand over passwords. Others have been charged. In cases where they did comply, people have lost sight of their device for a short period, or devices were confiscated and returned days or weeks later.




Read more:
Encrypted smartphones secure your identity, not just your data


On top of device searches, there is also canvassing of social media accounts. In 2016, the United States introduced an additional question on online visa application forms, asking people to divulge social media usernames. As this form is usually filled out after the flights have been booked, travellers might feel they have no choice but to part with this information rather than risk being denied a visa, despite the question being optional.

There is little oversight

Border agents may have a legitimate reason to search an incoming passenger – for instance, if a passenger is suspected of carrying illicit goods, banned items, or agricultural products from abroad.

But searching a smartphone is different from searching luggage. Our smartphones carry our innermost thoughts, intimate pictures, sensitive workplace documents, and private messages.

The practice of searching electronic devices at borders could be compared to police having the right to intercept private communications. But in such cases in Australia, police require a warrant to conduct the intercept. That means there is oversight, and a mechanism in place to guard against abuse. And the suspected crime must be proportionate to the action taken by law enforcement.

What to do if it happens to you

If you’re stopped at a border and asked to hand over your devices and passwords, make sure you have educated yourself in advance about your rights in the country you’re entering.

Find out whether what you are being asked is optional or not. Just because someone in a uniform asks you to do something, it does not necessarily mean you have to comply. If you’re not sure about your rights, ask to speak to a lawyer and don’t say anything that might incriminate you. Keep your cool and don’t argue with the customs officer.




Read more:
How secure is your data when it’s stored in the cloud?


You should also be smart about how you manage your data generally. You may wish to switch on two-factor authentication, which requires a second verification code on top of your password. And store sensitive information in the cloud on a secure European server while you are travelling, accessing it only on a needs basis. Data protection is taken more seriously in the European Union as a result of the recently enacted General Data Protection Regulation.
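The one-time codes behind app-based two-factor authentication are typically generated with the HOTP and TOTP algorithms (RFCs 4226 and 6238). A minimal sketch using only the Python standard library – the key shown in the usage note is the RFC test key, not something to use in practice:

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over an 8-byte counter, dynamically truncated."""
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32, period=30):
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time window."""
    key = base64.b32decode(secret_b32)
    return hotp(key, int(time.time()) // period)
```

The practical upshot: someone who has only your password still cannot log in without the current six-digit code generated on your device.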

Microsoft, Apple and Google all indicate that handing over a password to one of their apps or devices is in breach of their services agreement, privacy management, and safety practices. That doesn’t mean it’s wise to refuse to comply with border force officials, but it does raise questions about the position governments are putting travellers in when they ask for this kind of information.

Katina Michael, Professor, School for the Future of Innovation in Society & School of Computing, Informatics and Decision Systems Engineering, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

‘Use this app twice daily’: how digital tools are revolutionising patient care



New electronic devices are being used by people of all ages to track activity, measure sleep and record nutrition.
Shutterstock

Caleb Ferguson, Western Sydney University; Debra Jackson, University of Technology Sydney, and Louise Hickman, University of Technology Sydney

Imagine you’ve recently had a heart attack.

You’re a lucky survivor. You’ve received high-quality care from nurses and doctors whilst in hospital and you’re now preparing to go home with the support of your family.

The doctors have made it clear that the situation is grim. It’s a case of: change your lifestyle or die. You’ve got to stop smoking, increase your physical activity, eat a healthy balanced diet (whilst reducing your salt), and make sure you take all your medicine as prescribed.




Read more:
Evidence-based medicine is broken: why we need data and technology to fix it


But before you leave the hospital, the cardiology nurse wants to talk to you. There are a few apps you can download on your smartphone that will help you manage your recovery, including the transition from hospital to home and all the health-related behavioural changes necessary to reduce the risk of another heart attack.

Rapid advancements in digital technologies are revolutionising healthcare. The benefits are numerous, but the rate of development is difficult to keep up with. And that’s creating challenges for both healthcare professionals and patients.

What are digital therapeutics?

Digital therapeutics can be defined as any intervention that is digitally delivered and has a therapeutic effect on a patient. They can be used to treat medical conditions in a similar way to drugs or surgery.

Current examples of digital therapeutics include apps for managing medications and cardiovascular health, apps to support mental health and well being, or augmented and virtual reality tools for patient education.

Paper-based letters, health records, prescription charts and education pamphlets are outdated. We can now send emails, enter information into electronic databases and access electronic medication charts.

And patient education is no longer a static, one-way communication. The digital revolution facilitates dynamic and personalised education, and a two-way interaction between patient and therapist.

How do digital therapeutics help?

Digital health care improves overall quality of care, even in cases where a patient lives hundreds of kilometres away from their doctor.

Take diabetes for example. This condition affects 1.7 million Australians. It’s a major risk factor for developing cardiovascular disease and stroke. So it’s important that people with diabetes manage their condition to reduce their risk.

A recent study evaluated a team-based online game, which was delivered by an app to provide diabetes self-management education. The participants who received the app in this trial had meaningful and sustained improvements in their diabetes, as measured by their HbA1c (a marker of average blood glucose levels).

App based games of this kind hold promise to improve chronic disease outcomes at scale.

New electronic devices are also being used by people of all ages to track activity, measure sleep and record nutrition. This information provides instant and accurate feedback to individuals and their therapists, allowing for adjustments where necessary. The logged information can also be combined into large data sets to reveal patterns over time and inform future treatments.




Read more:
How virtual reality spiders are helping people face their arachnophobia


Digital therapeutics are spawning a new language within the healthcare industry. “Connected health” reflects the increasingly digital ways clinicians and patients communicate. A few examples include text messaging, telehealth, and video consultations with health professionals.

There is increasing evidence that digitally delivered care (including apps and text message based interventions) can be good for your health and can help you manage chronic conditions, such as diabetes and cardiovascular disease.

But not all health apps are the same

Whilst the digital health revolution is exciting, results of research studies should be carefully interpreted by patients and providers.

Innovation has led to some 325,000 mobile health apps being available in 2017. This raises significant governance issues relating to patient safety (including data protection) when using digital therapeutics.

A recent review identified that most studies have a relatively short duration of intervention and only reflect short-term follow up with participants. The long-term effect of these new therapeutic interventions remains largely unknown.

The current speed of technological development means the usual safety mechanisms face new ethical and regulatory challenges. Who is doing the prescribing? Who is responsible for the efficacy, storage and accuracy of data? How are these technologies being integrated into existing care systems?

Digital health needs a collaborative approach

Digital health presents seismic disruption to patient care, particularly when new technologies are cheap and readily accessible to patients who might lack the insight required to recognise normality or cause for alarm. Technology can be enabling and empowering for self-management; however, a lot more needs to be done to link these new technologies into the current health system.

Take the new Apple Watch functionality of heart rate notifications for example. Research like the Apple Heart Study suggests this exciting innovation could lead to significantly improved detection rates of heart rhythm disorders, and enhanced stroke prevention efforts.

But when a patient receives a high heart rate notification, what should they do? Ignore it? Go to a GP? Head straight to the emergency department? And what is the flow-on impact on the health system?




Read more:
Why virtual reality won’t replace cadavers in medical school


Many of these questions remain unanswered, suggesting there is an urgent need for research that examines how technology is implemented into existing healthcare systems.

If we are to produce useful digital therapeutics for real-world problems, then it is critical that the end-users are engaged in the process. Patients and healthcare professionals will need to work with software developers to design applications that meet the complex healthcare needs of patients.

Caleb Ferguson, Senior Research Fellow, Western Sydney University; Debra Jackson, Professor, University of Technology Sydney, and Louise Hickman, Associate Professor of Nursing, University of Technology Sydney

This article was originally published on The Conversation. Read the original article.

Let the light shine on super-fast wireless connections



Light can be used as a high-speed form of wireless communication.
Shutterstock/ra2studio

Thas Ampalavanapillai Nirmalathas, University of Melbourne; Christina Lim, University of Melbourne, and Elaine Wong, University of Melbourne

We live in a world of wireless communications, from the early days of radio to new digital television, Wi-Fi and the latest 4G (soon to be 5G) connected smart devices.

But there are limits to this wireless world. With the prediction of 12 billion mobile-connected devices by 2021 and a projected sevenfold increase in wireless traffic, the search is on for any new method of wireless connectivity.

One solution could be right before our very eyes, if only we could see it.




Read more:
The 5G network threatens to overcrowd the airwaves, putting weather radar at risk


Current wireless connections

All wireless applications – such as mobile communications, Wi-Fi, broadcasting, and sensing – rely on some form of electromagnetic radiation.

The difference between these applications is simply the frequency of the signal (the carrier frequency) used in the electromagnetic radiation.

For example, current mobile phones sold as 3G and 4G operate in the lower microwave frequency bands (850MHz, 1.8GHz, 2-2.5GHz). A wireless local area network such as Wi-Fi operates in the 2.4GHz and 5GHz bands, whereas digital terrestrial television operates at 600-620MHz.

The spectrum of electromagnetic radiation covers a very broad range of frequencies and some of these are selected for specific applications.

These frequency regions are highly contested and valuable resources for wireless applications.

Running out of spectrum

Our current spectrum use in the lower microwave region will soon be heavily congested, even exhausted. It would be difficult to squeeze out any more spare spectrum for new wireless applications.

To carry information on one of these frequencies, a frequency band needs sufficient bandwidth – which determines how much information can be transmitted – to meet future requirements. At the lower end of the spectrum, the bands are too narrow to support speeds exceeding gigabits per second.
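The link between bandwidth and achievable speed can be made concrete with the Shannon–Hartley theorem, which caps a channel's error-free data rate by its bandwidth and signal-to-noise ratio. The figures below are illustrative assumptions (a 20MHz channel at 30dB SNR), not measurements from the article:

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Maximum error-free data rate (bits/s) of an ideal channel."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Assumed figures: a 20MHz channel with a 30dB signal-to-noise
# ratio (SNR of 1,000 in linear terms).
capacity = shannon_capacity_bps(20e6, 1000)
print(f"{capacity / 1e6:.0f} Mb/s")  # roughly 200 Mb/s

# Even with an excellent SNR, a narrow band caps the achievable rate:
# capacity grows linearly with bandwidth but only logarithmically with
# signal power, which is why gigabit-class wireless needs wider bands.
```

This is why the search moves up the spectrum: wider contiguous bands, not just stronger signals, are what unlock multi-gigabit speeds.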

At the higher end of the spectrum, ionising radiation such as x-rays and gamma rays cannot be used because of safety issues.

Despite the current 4G wireless standard promising more shared capacity (1Gb/s), the projected demand and traffic volume are already pushing the existing infrastructure to its limits. The future promise of 5G communication only adds to the problem.

A major rethink of the current wireless technologies is needed to meet these challenging requirements.

Let there be light!

The wireless transmission of optical signals has emerged as a viable option. It offers advantages not possible with current wireless technologies.

Optical wireless promises greater speed, higher throughput, and potentially lower energy consumption. By leveraging existing optical wired infrastructure (namely optical fibre cables and networks), optical wireless connectivity can deliver seamless high capacity to end-users.

An example would be using optical wireless connectivity inside buildings to complement fibre-to-the-home deployments.

Optical wireless networks would be immune to electromagnetic interference and so could be deployed in radio frequency (RF) sensitive environments. You’ve probably seen those warning signs asking you not to use your mobile phone in hospitals, aircraft and other areas where equipment is sensitive to interference.

Optical wireless communications can be divided into visible light and infrared systems.

And let there be sight

A common issue with both is that devices need to be in the line of sight, as any physical obstruction can result in the loss of transmission. You may have experienced this issue when attempting to change a channel on TV if someone or something gets in the way of your remote.

Visible light communication (VLC) relies on LEDs that are also used for lighting. For example, by flashing LED lights located in the ceiling of a room at a rate much higher than can be discerned by the human eye, information can be conveyed to detectors around the room.

The major limitation of VLC is the limited bandwidth of commercially available white LEDs (around 100MHz), which restricts transmission speeds.
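The flashing-LED idea amounts to simple on-off keying: each bit maps to the LED being on or off for one symbol period, and the receiver's photodetector samples the light level to recover the bits. This toy round-trip simulation is our own sketch (the function names and the 10-samples-per-bit rate are illustrative assumptions, not a real VLC stack):

```python
SAMPLES_PER_BIT = 10  # assumed oversampling rate at the photodetector

def led_transmit(bits):
    """Encode bits as LED intensity samples: 1 -> on, 0 -> off."""
    signal = []
    for bit in bits:
        signal.extend([1.0 if bit else 0.0] * SAMPLES_PER_BIT)
    return signal

def photodetector_receive(signal):
    """Average each symbol period and threshold to recover the bits."""
    bits = []
    for i in range(0, len(signal), SAMPLES_PER_BIT):
        level = sum(signal[i:i + SAMPLES_PER_BIT]) / SAMPLES_PER_BIT
        bits.append(1 if level > 0.5 else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1]
assert photodetector_receive(led_transmit(message)) == message
```

Because the LED can only switch so fast (its roughly 100MHz bandwidth), there is a hard ceiling on how many such symbol periods fit into each second, and hence on the data rate.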

Infrared communication systems have ample bandwidth, with the potential to transmit tens of Gb/s per user. Despite this major advantage over VLC, the need for line-of-sight has left the technology under-developed. Until now.

To overcome this, we have demonstrated an infrared-based optical wireless communication link that can support a user on the move. By using a pair of access points with some spatial separation, any blockage of beams can be easily overcome as users hop freely from beam to beam.

Optical wireless systems can be built to ensure secure wireless transmission. Using efficient wireless protocols, it's possible to transmit data with minimal delay and to allow users to move within a building while enjoying high-speed wireless coverage.

Optical wireless in action

We will in future be using a range of devices, such as virtual reality (VR) and augmented reality (AR) devices, that all require superfast wireless connections.

For example, these new user interfaces are poised to make a big difference to the way museums and galleries will operate in the future. Currently, most of these platforms are linked via wired connections. But wireless interfaces would make them far easier to use.

The uptake of optical wireless as a viable communications technology could also open the possibility of using low-cost optical wireless transceivers in place of expensive optical fibre rollouts in rural and regional broadband contexts.

The integrated transceivers for infrared optical wireless communications are still under development, and more effort is needed to speed up that integration. But research teams here and abroad are working to advance the way such systems can be used in realistic scenarios.

Thas Ampalavanapillai Nirmalathas, Director – Networked Society Institute and Professor of Electrical and Electronic Engineering, University of Melbourne; Christina Lim, Professor, University of Melbourne, and Elaine Wong, Associate Dean, Diversity and Inclusion, University of Melbourne

This article was originally published on The Conversation. Read the original article.

Taking on big tech: where does Australia stand?


How will Australia rule when it comes to big tech?
shutterstock

Caron Beaton-Wells, University of Melbourne

Big tech is under fire in Europe. In its latest sting, the European Commission has slapped Google with an eye-watering €4.3 billion (AU$6.8 billion) fine for anti-competitive tying of its Android operating system to its in-house search engine and web browser.

The decision follows the Commission’s €2.4 billion (AU$3.5 billion) fine against the company for giving illegal advantage to its comparison shopping service, just over a year ago.

And the search company is not alone in feeling the heat from Brussels. Apple, Amazon, Facebook and Microsoft have all been on the receiving end of what some see as a “techlash” reflecting anti-US bias and protectionism.

So far, US competition authorities have taken a far more restrained approach. The Federal Trade Commission looked into various Google practices in 2012 and found it had no case to raise around search bias. Antitrust officials are instead encouraging vigilance but caution when it comes to intervening in data-driven markets characterised by high rates of innovation.

Closer to home, the Australian Competition and Consumer Commission (ACCC) is conducting an inquiry into the impact of digital platforms on media and advertising markets. It is attracting intense interest, not just here but abroad. There are also reports of the ACCC separately investigating Google’s data-harvesting practices.




Read more:
What consumers need from the ACCC inquiry into Google and Facebook


Will the ACCC follow the US or European approach?

It is tempting to speculate about the outcomes of the inquiry in those terms, but to do so would be a mistake.

There are differences in the substantive laws across jurisdictions. The Australian rule on misuse of market power, for example, is not an exact replica of either its US or EU counterpart.

What’s more, the law in any country can only be understood by considering its ideological roots, the political and socio-cultural conditions in which it is shaped, and the institutional framework that determines its application. In other words, history and context matter.

Divergence between the US and European Union on how to deal with large powerful companies is nothing new and, in context, not all that surprising.

In the US competition is about consumer welfare

US antitrust laws were introduced in response to the economic and political power of “big business” and what was seen as a need to protect the “little guy” from a few “robber barons”.

US antitrust laws were created to prevent big companies from dominating the market.
Shutterstock

However, since the 1970s, under the influence of the Chicago school, commitment to economic efficiency in the interests of consumer welfare has become the singular goal of antitrust.

In practice, this has meant agencies and courts have stressed a ground rule of trusting markets to self-correct and erring on the side of false negatives rather than false positives. Where there is intervention, it is to protect competition and not competitors.

In recent times, there has been growing dissatisfaction in some circles about the levels of concentration in the US economy and the role that permissive antitrust has played in creating so-called “data-opolies”.

Nevertheless, Chicagoan themes continue to underpin self-restraint on the part of US antitrust agencies, including when it comes to big tech.

In Europe fairness counts too

While EU competition laws are also concerned about the consumer, they are more pluralistic in their approach. This reflects their experience in the aftermath of the second world war, and the single market project.

EU-style antitrust has therefore always been based on – and continues to reflect – more normative values than the US, protecting ideas like economic freedom and fairness.

Fairness in this context, however, is not necessarily about protecting the losers from a legitimate competitive process. It is about protecting the right to equal opportunities for efficient competitors, or merit-based competition on a level playing field.

It is also about ensuring fairness for consumers. Anti-competitive conduct, the European competition boss argues, is unfair because it deprives consumers of the power to arbitrate the marketplace.

Australian competition law has its own flavour

Born in the late 1970s, the modern version of Australian competition law has followed the Chicagoan song sheet, favouring economic efficiency for consumer welfare as its primary purpose.

However, in a relatively small economy, marked by oligopolistic structures and high concentration in key sectors, Australia has always struggled with a balancing act between promoting efficiency and protecting small business.

“Fair competition” (a version of the iconic “fair go”) is a phrase often heard in Australian competition law dialect. But it is not to be confused with propping up inefficient rivals at the expense of competition.

Unlike in many other countries, Australia’s competition rules live within a statute that also has rules to deal separately with ensuring small businesses and consumers are treated fairly.




Read more:
Australia’s consumer laws still don’t cover e-books and many other digital products


Under the Competition and Consumer Act, the competition, fair trading and consumer protection provisions are mutually reinforcing. Also unlike in either the US or EU, these provisions are enforced by a single agency, the ACCC.

Distinctive too is that the ACCC is an agency with substantial regulatory responsibilities in areas including communications and infrastructure. These may be relevant in a debate about whether powerful tech companies should be regulated like public utilities on the grounds that they provide services that are essential to consumers.

The ACCC’s inquiry will be holistic

Given this legislative and institutional framework, the ACCC’s take on big tech is likely to be a mix of US and EU approaches with more than a dash of homemade seasoning.

Competition

It will consider if platforms have market power, in which markets, and how such power is being exercised. Implications for the price, quality and choice of news for consumers will loom large.

It will consider what impact the proposed consumer right to data may have.

Consumer protection

It will also examine whether platforms are providing users with adequate levels of privacy and data protection.

Fair trading

The ACCC will look into whether the large platforms are behaving in ways that dampen innovation and investment incentives for start-ups and smaller players.

Regulation

There will be consideration of whether platforms have an unfair advantage because the regulatory playing field is not even. Regulation of journalistic content and copyright will also come into play.




Read more:
News outlets air grievances and Facebook plays the underdog in ACCC inquiry


Most importantly, the inquiry will not be static in its focus. It will have a firm eye on potential long term trends in and impacts of technological change within an Australian context.

Within a broad holistic framework, the ACCC will examine these questions in an integrated way. And it will take its time to “get the answers right”.


A version of this article appears on Pursuit. Professor Caron Beaton-Wells will launch a podcast on 26 July about Competition Lore, focusing on the challenges of competition in a digital age. Listen to ACCC Chairman Rod Sims discuss the Digital Platforms Inquiry in episode three of the podcast.

Caron Beaton-Wells, Professor, Melbourne Law School, University of Melbourne

This article was originally published on The Conversation. Read the original article.

Here’s what a privacy policy that’s easy to understand could look like



We need a simple system for categorising data privacy settings, similar to the way Creative Commons specifies how work can be legally shared.
Shutterstock

Alexander Krumpholz, CSIRO and Raj Gaire, CSIRO

Data privacy awareness has recently gained momentum, thanks in part to the Cambridge Analytica data breach and the introduction of the European Union’s General Data Protection Regulation (GDPR).

One of the key elements of the GDPR is that it requires companies to simplify their privacy-related terms and conditions (T&Cs) so that they are understandable to the general public. As a result, companies have been rapidly updating their T&Cs and notifying their existing users.




Read more:
Why your app is updating its privacy settings and how this will affect businesses


On one hand, these new T&Cs are now simplified legal documents. On the other hand, they are still too long. Unfortunately, most of us still skip reading those documents and simply click “accept”.

Wouldn’t it be nice if we could specify our general privacy preferences in our devices, have them check privacy policies when we sign up for apps, and warn us if the agreements overstep?

This dream is achievable.

Creative Commons as a template

For decades, software was sold or licensed with licence agreements that were several pages long, written by lawyers and hard to understand. Later, software came with standardised licences, such as the GNU General Public Licence, the Berkeley Software Distribution licence, or the Apache License. Those licences define users’ rights in different use cases and protect the provider from liabilities.

However, they were still hard to understand.

With the foundation of Creative Commons (CC) in 2001, a simplified licence was developed that reduced complex legal copyright agreements to a small set of copyright classes.

These licences are represented by small icons and short acronyms, and can be used for images, music, text and software. This helps creative users to immediately recognise how – or whether – they can use the licensed content in their own work.




Read more:
Explainer: Creative Commons


Imagine you have taken a photo and want to share it with others for non-commercial purposes only, such as to illustrate a story on a not-for-profit news website. You could licence your photo as CC BY-NC when uploading it to Flickr. In Creative Commons terms, the abbreviation BY (for attribution) requires the user to cite the owner and NC (non-commercial) restricts the use to non-commercial applications.

Internet search engines will index these attributes with the files. So if I search for photos explicitly licensed with those restrictions, via Google for example, I will find your photo. This is possible because even computers can understand these licences.

We need to develop Privacy Commons

Similar to Creative Commons licences under which creative content is given to others, we need Privacy Commons by which companies can inform users how they will use their data.

The Privacy Commons need to be legally binding, simple for people to understand and simple for computers to understand. Here are our suggestions for what a Privacy Commons might look like.

We propose that the Privacy Commons classifications cover at least three dimensions of private data: collection, protection, and spread.

What data is being collected?

This dimension specifies what level of personal information is collected from the user, and is therefore at risk: for example, name, email, phone number, address, date of birth, biometrics (including photos), relationships, networks, personal preferences, and political opinions. These could be categorised at different levels of sensitivity.

How is your data protected?

This dimension specifies:

  • where your data is stored – within an app, on one server, or on servers at multiple locations
  • how it is stored and transported – whether as plain text or encrypted
  • how long the data is kept – days, months, years or permanently
  • how access to your data is controlled within the organisation – this indicates how well your data is protected against potentially malicious actors like hackers.

How is your data spread?

In other words, who is your data shared with? This dimension tells you whether or not the data is shared with third parties. If the data is shared, will it be de-identified appropriately? Is it shared for research purposes, or sold for commercial purposes? Are there any further controls in place after the data is shared? Will it be deleted by the third party when the user deletes it at the primary organisation?
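To show how these three dimensions could be made machine-readable, here is a sketch of a compact label, analogous to a CC BY-NC tag. The category names and code format are our own illustrative assumptions, not a published standard:

```python
from dataclasses import dataclass

# Illustrative category codes for each dimension; a real Privacy
# Commons scheme would need agreed, legally vetted definitions.
COLLECTION = {"basic": "C1", "behavioural": "C2", "biometric": "C3"}
PROTECTION = {"encrypted": "P1", "plaintext": "P2"}
SPREAD = {"none": "S1", "research": "S2", "commercial": "S3"}

@dataclass
class PrivacyLabel:
    collection: str  # what data is collected
    protection: str  # how it is stored and transported
    spread: str      # who it is shared with

    def code(self) -> str:
        """Compact label a device could match against user preferences."""
        return "-".join([COLLECTION[self.collection],
                         PROTECTION[self.protection],
                         SPREAD[self.spread]])

label = PrivacyLabel("behavioural", "encrypted", "commercial")
print(label.code())  # C2-P1-S3
```

A phone could compare such codes against a user's stated preferences before installing an app, along the lines suggested below for children's devices.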




Read more:
94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour


Privacy Commons will help companies think about user privacy before offering services. It will also help solve the problem of communicating about privacy in the same way that Creative Commons is solving the problem of licensing for humans and computers. Similar ideas have been discussed in the past, such as by Mozilla. We need to revisit those thoughts in the contemporary context of the GDPR.

Such a system would allow you to specify Privacy Commons settings in the configuration of your children’s devices, so that only appropriate apps can be installed. Privacy Commons could also be applied to inform you about the use of your data gathered for other purposes like loyalty rewards cards, such as FlyBuys.

Of course, Privacy Commons will not solve everything.

For example, it will still be a challenge to address concerns about third party personal data brokers like Acxiom or Oracle collecting, linking and selling our data without most of us even knowing.

But at least it will be a step in the right direction.

Alexander Krumpholz, Senior Experimental Scientist, CSIRO and Raj Gaire, Senior Experimental Scientist, CSIRO

This article was originally published on The Conversation. Read the original article.