Humans have a long history of living on water. Our water homes span from the fishing villages of Southeast Asia, Peru and Bolivia to the modern floating homes of Vancouver and Amsterdam. As our cities grapple with overcrowding and undesirable living conditions, the ocean remains a potential frontier for sophisticated water-based communities.
The United Nations has expressed support for further research into floating cities in response to rising sea levels and to house climate refugees. A speculative proposal, Oceanix City, was unveiled in April at the first Round Table on Sustainable Floating Cities at UN headquarters in New York.
The former tourism minister of French Polynesia, Marc Collins Chen, and architecture studio BIG advanced the proposal. Chen is involved with the Seasteading Institute, which is seeking to develop autonomous city-states floating in the shallow waters of “host nations”.
While this latest proposal has gained UN attention, it is an old idea we have repeatedly returned to over the past 70 years with little success. In fact, the Oceanix City proposal has not reached the same level of technical sophistication as previous models.
A brief history of floating cities
The architecture community was fascinated with marine utopias between the 1950s and ’70s. The technological optimism of this period led architects to consider whether we could build settlements in inhospitable places like the polar regions, the deserts and on the sea.
The Japanese Metabolists put forward incredible projects such as Kenzo Tange’s 1960 Tokyo Bay Plan and the marine city proposals of Kikutake and Kurokawa.
These proposals were directed at solving the impending urban crises of overpopulation and pressures on land-based resources. Many were even sophisticated enough to be patented.
The arc of this global architectural discussion was captured during the first UN Habitat conference (“Habitat I”) in Vancouver in 1976. In many ways, the UN has returned to the Vancouver Declaration from Habitat I to “[adopt] bold, meaningful and effective human settlement policies and spatial planning strategies” and to treat “human settlements as an instrument and object of development”.
We are seeing a pivot that began in 2008 with Vincent Callebaut’s “Lilypad” – a “floating ecopolis for ecological refugees”.
Where floating cities were once dismissed as too far-fetched, the concept has been repackaged and is re-emerging into public consciousness – this time in a more politically viable form, as a means of addressing the climate emergency.
The technology and types of floating city structures
No floating settlements have ever been created on the high seas. Current offshore engineering is concerned with how cities can locate infrastructure, such as airports, nuclear power stations, bridges, oil storage facilities and stadiums, in shallow coastal environments rather than in deep international waters.
Two main types of very large floating structures (VLFS) technology can be used to carry the weight of a floating settlement.
The first, pontoon structures, are flat slabs suitable for floating in sheltered waters close to shore.
The second, semi-submersible structures (such as oil rigs), comprise platforms that are elevated on columns off the water surface. These can be located in deep waters. Potentially, oil rigs could be repurposed for such floating cities in international waters.
Oceanix City is based on the pontoon structure. This would restrict it to shallower waters with breakwaters to limit the impacts of waves. This sort of structure could serve as an extension of a coastal city, as a life raft for island communities inundated by rising waters, or to provide mobile essential services to residents of flood-prone slums.
Sovereign floating cities and micronations
While some early marine utopian proposals were responses to emerging urban issues, many proposals conceptualised “seaborne leisure colonies”. These communities would be independent city-states allowing inhabitants to circumvent tax laws or restrictions on medical research in their own countries.
This sort of floating city was conceived of as a micronation with sovereignty and the ability to grant citizenship to its occupants. The example was set by the Principality of Sealand, off the coast of Britain.
None of these proposals have succeeded. Even modern attempts such as the Freedom Ship and the Seasteading Institute’s plans for an autonomous floating settlement under French Polynesian jurisdiction have stalled. A recent attempt at creating a sovereign micronation (seastead) off Thailand led to its proponents becoming fugitives, potentially facing the death penalty.
A viable project?
Technology is not a barrier to floating cities in international waters. Advances in technology enable us to create structures for habitation in deep sea waters. These schemes have never really taken off because of political and commercial barriers.
While this time round proponents are packaging floating cities in a more politically viable concept – as a life raft for climate refugees – commercial barriers remain. Apart from the UN, few organisations have the economic and political influence, or reason, to deliver a satellite floating city in the ocean.
In my view, the future of ocean cities is in technology campuses and in tourism. Given the significant risk of a community in extreme isolation in international waters, the solution to bringing people together in mid-ocean requires us to think about what connects us: technology, work and play. In these three elements we see, perhaps, the two lowest-hanging fruits (or the most buoyant of possibilities) for ocean cities.
The first is in floating tech campuses where large technology companies set up floating data centres and campuses in international waters. Situated outside national jurisdictions, these campuses could circumvent increasingly onerous privacy regimes or offer innovative technological services without having to negotiate regulatory barriers.
The second prospect is a return to the seaborne leisure colonies of the past. Companies like Disney could expand on their cruise offerings to build floating theme parks. These resorts could be sited in international waters or hosted by coastal cities.
Given our fascination with living on water, even if Oceanix City does not succeed, it won’t be long before we see another floating city proposal. And if we get the mix of social, political and commercial drivers right, we might just find ourselves living on one.
Kate Letheren, Queensland University of Technology; Rebekah Russell-Bennett, Queensland University of Technology; Rory Mulcahy, University of the Sunshine Coast, and Ryan McAndrew, Queensland University of Technology
We have access to plenty of technology that can serve us by automating more of our daily lives, doing everything from adjusting the temperature of our homes to (eventually) putting groceries in our fridges.
But do we want these advancements? And – importantly – do we trust them?
Our research, published earlier this year in the European Journal of Marketing, looked at the roles technology plays in Australian homes. We found three main ways people assign control to, and trust in, their technology.
Most people still want some level of control. That’s an important message for developers if they want to keep increasing the uptake of smart home technology, which is yet to reach 25% penetration in Australia.
How smart do we want a home?
Smart homes are modern homes that have appliances or electronic devices that can be controlled remotely by the owner. Examples include lights controlled via apps, smart locks and security systems, and even smart coffee machines that remember your brew of choice and waking time.
But we still don’t understand consumer interactions with these technologies, and to speed up their adoption we need to know the type of value they can offer.
We conducted a set of studies in conjunction with CitySmart and a group of distributors, and we asked people about their smart technology preferences in the context of electricity management (managing appliances and utility plans).
We conducted 45 household interviews involving 116 people across Queensland, New South Wales, Western Australia, and Tasmania. Then we surveyed 1,345 Australian households. The interviews uncovered and explored the social roles assigned to technologies, while the survey allowed us to collect additional information and find out how the broader Australian population felt about these technology types.
We found households attribute social roles and rules to smart home technologies. This makes sense: the study of anthropomorphism tells us we tend to humanise things we want to understand. We humanise in order to trust (remember Clippy, the Microsoft paperclip with whom we all had a love-hate relationship?).
These social roles and rules determine whether (or how) households will adopt the technologies.
Tech plays three roles
Most people want technology to serve them (95.6% of interviewees, about 19 out of 20). Those who didn’t want any technology were classified as “resisters” and made up less than 5% of the respondents.
We found the role technology can play in households tended to fall into one of three categories – the intern, the assistant and the manager:
the intern (passive technology)
Technology exists to bring me information, but shouldn’t be making any decisions on its own. Real-life example: Switch your Thinking provides an SMS-based tip service. This mode of use was preferred in 22-35% of households.
the assistant (interactive technology)
Technology should not only bring me information, but add value by helping me make decisions or interact. Real-life example: Homesmart from Ergon provides useful data to support consumers in their decisions, including remotely controlling appliances or monitoring an electricity budget. This mode of use was preferred in 41-51% of households.
the manager (proactive technology)
Technology should analyse information and make decisions itself, in order to make my life more efficient. Real-life example: Tibber, which learns your home’s electricity-usage pattern and helps you make adjustments. This mode of use was preferred in 22-24% of households.
Who’s the boss?
According to our study, while smart technology roles can change, the customer always remains the CEO. As CEO, they determine whether full control is retained or delegated to the technology.
For example, while two consumers might install a set of smart lights, one may engage by directly controlling lights via the app, while the other delegates this to the app – allowing it to choose based on sunset times when lights should be on.
Notably, time pressure was evident as justification for each of the three options. Passive technology saved time by not wasting it on fiddling with smart tech. Interactive technology gave information and controlled interactions for busy families. Proactive technology relieved overwhelmed households from managing their own electricity.
All households had clear motivation for their choices.
Households that chose passive technology were motivated by simplicity, cost-effectiveness and privacy concerns. One study participant in this group said:
Less hassle. Don’t like tech controlling my life.
Households prioritising interactive technology were looking for a balance of convenience and control, technology that provides:
A good support but allows me to maintain overall control and decision-making.
Households keen on proactive technology wanted “set and forget” capabilities that allow the household to focus on the more important things in life. They sought:
Having the process looked after and managed for us as we don’t have the time to do it ourselves.
This raises the question: why did we see such differences in household preference?
Trust in tech
According to our research, this comes down to the relationship between trust, risk, and the need for control. These motivations are simply expressed differently in different households.
While one household sees delegating their choices as a safe bet (that is, trusting the technology to save them from the risk of electricity over-spend), another would see retaining all choices as the true expression of being in control (that is, believing humans should be trusted with decisions, with technology providing input only when asked).
This is not unusual; nor is this the first study to find that our sense of trust and risk is important in technology decisions.
It’s not that consumers don’t want advancements to serve them – they do – but this working relationship requires clear roles and ground rules. Only then can there be trust.
For smart home technology developers, the message is clear: households will continue to expect control and customisation features so that the technology serves them – either as an intern, an assistant, or a manager – while they remain the CEO.
If you’re interested in discovering your working relationship with technology, complete this three-question online quiz.
Kate Letheren, Postdoctoral Research Fellow, Queensland University of Technology; Rebekah Russell-Bennett, Social Marketing Professor, School of Advertising, Marketing and Public Relations, Queensland University of Technology; Rory Mulcahy, Lecturer of Marketing, University of the Sunshine Coast, and Ryan McAndrew, Social Marketer & Market Researcher, Queensland University of Technology
Despite what every spy movie in the past 30 years would have you think, fingerprint and face scanners used to unlock your smartphone or other devices aren’t nearly as secure as they’re made out to be.
While it’s not great if your password is made public in a data breach, at least you can easily change it. If the scan of your fingerprint or face – known as “biometric template data” – is revealed in the same way, you could be in real trouble. After all, you can’t get a new fingerprint or face.
Your biometric template data are permanently and uniquely linked to you. The exposure of that data to hackers could seriously compromise user privacy and the security of a biometric system.
Current techniques provide effective protection against breaches, but advances in artificial intelligence (AI) risk rendering these protections obsolete.
How biometric data could be breached
If a hacker wanted to access a system that was protected by a fingerprint or face scanner, there are a number of ways they could do it:
your fingerprint or face scan (template data) stored in the database could be replaced by a hacker to gain unauthorised access to a system
a physical copy or spoof of your fingerprint or face could be created from the stored template data (with Play-Doh, for example) to gain unauthorised access to a system
stolen template data could be reused to gain unauthorised access to a system
stolen template data could be used by a hacker to unlawfully track an individual from one system to another.
Biometric data need urgent protection
Biometric systems are increasingly used in civil, commercial and national defence applications.
Consumer devices equipped with biometric systems are found in everyday electronic devices like smartphones. MasterCard and Visa both offer credit cards with embedded fingerprint scanners. And wearable fitness devices are increasingly using biometrics to unlock smart cars and smart homes.
So how can we protect raw template data? A range of encryption techniques have been proposed. These fall into two categories: cancellable biometrics and biometric cryptosystems.
In cancellable biometrics, complex mathematical functions are used to transform the original template data when your fingerprint or face is being scanned. This transformation is non-reversible, meaning there’s no risk of the transformed template data being turned back into your original fingerprint or face scan.
In a case where the database holding the transformed template data is breached, the stored records can be deleted. Additionally, when you scan your fingerprint or face again, the scan will result in a new unique template even if you use the same finger or face.
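The transformation step can be illustrated with a toy example. One approach proposed in the research literature is a key-dependent random projection (as in BioHashing): the features from a scan are projected onto random directions derived from a revocable user key, and only the signs are stored. The sketch below is illustrative only; the function `cancellable_template` and its parameters are invented for this example, and real systems use much larger feature vectors and cryptographically stronger transforms.

```python
import random

def cancellable_template(features, user_key, dim=32):
    # Derive a random projection from the revocable user key.
    # "Cancelling" a compromised template just means issuing a new
    # key, which produces a completely different template.
    rng = random.Random(user_key)
    template = []
    for _ in range(dim):
        weights = [rng.uniform(-1, 1) for _ in features]
        proj = sum(w * f for w, f in zip(weights, features))
        # Keeping only the sign of each projection discards
        # information, so the stored bits cannot be inverted back
        # into the original scan.
        template.append(1 if proj >= 0 else 0)
    return template

# Same features + same key -> same template; a new key -> a new one.
features = [0.2, -1.3, 0.7, 0.05]  # toy feature vector from a scan
stored = cancellable_template(features, "key-A")
reissued = cancellable_template(features, "key-B")
```

Because the key, not the fingerprint, is what gets replaced after a breach, the same finger can be re-enrolled indefinitely.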
In biometric cryptosystems, the original template data are combined with a cryptographic key to generate a “black box”. The cryptographic key is the “secret” and query data are the “key” to unlock the “black box” so that the secret can be retrieved. The cryptographic key is released upon successful authentication.
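The “black box” idea can likewise be sketched as a fuzzy-commitment-style key-binding scheme: the secret is spread over an error-correcting code, XORed with the template bits for storage, and released only when a fresh scan is close enough to the enrolled one. This toy version uses a simple repetition code and invented function names (`enroll`, `release`); production schemes use proper error-correcting codes such as BCH.

```python
import hashlib

def enroll(template_bits, secret_bits):
    # Repetition-code ECC: each secret bit is repeated so a noisy
    # rescan can still release the secret via majority vote.
    rep = len(template_bits) // len(secret_bits)
    codeword = [b for b in secret_bits for _ in range(rep)]
    # Bind: store only the XOR of codeword and template (the "black
    # box"), plus a hash of the secret so release can be verified.
    locked = [c ^ t for c, t in zip(codeword, template_bits[:len(codeword)])]
    check = hashlib.sha256(bytes(secret_bits)).hexdigest()
    return locked, check

def release(locked, query_bits, n_secret, check):
    # XOR with a fresh scan leaves the codeword plus any bit errors
    # between the query and the enrolled template.
    rep = len(locked) // n_secret
    noisy = [l ^ q for l, q in zip(locked, query_bits[:len(locked)])]
    secret = [1 if sum(noisy[i * rep:(i + 1) * rep]) * 2 > rep else 0
              for i in range(n_secret)]
    ok = hashlib.sha256(bytes(secret)).hexdigest() == check
    return secret if ok else None  # only a close match unlocks the secret
```

Note that the system stores neither the raw template nor the secret: only the XOR combination and a hash used to verify a successful release.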
AI is making security harder
In recent years, new biometric systems that incorporate AI have really come to the forefront of consumer electronics. Think: smart cameras with built-in AI capability to recognise and track specific faces.
But AI is a double-edged sword. While new developments, such as deep artificial neural networks, have enhanced the performance of biometric systems, potential threats could arise from the integration of AI.
For example, researchers at New York University created a tool called DeepMasterPrints. It uses deep learning techniques to generate fake fingerprints that can unlock a large number of mobile devices. It’s similar to the way that a master key can unlock every door.
Researchers have also demonstrated how deep artificial neural networks can be trained so that the original biometric inputs (such as the image of a person’s face) can be obtained from the stored template data.
New data protection techniques are needed
Thwarting these types of threats is one of the most pressing issues facing designers of secure AI-based biometric recognition systems.
Existing encryption techniques designed for non-AI-based biometric systems are incompatible with AI-based biometric systems, so new protection techniques are needed.
Academic researchers and biometric scanner manufacturers should work together to secure users’ sensitive biometric template data, thus minimising the risk to users’ privacy and identity.
In academic research, special focus should be put on the two most important aspects: recognition accuracy and security. As this research falls within Australia’s science and research priority of cybersecurity, both the government and private sectors should provide more resources to the development of this emerging technology.
When it comes to personal cybersecurity, you might think you’re doing alright. Maybe you’ve got multi-factor authentication set up on your phone so that you have to enter a code sent to you by SMS before you can log in to your email or bank account from a new device.
What you might not realise is that new scams have made authentication using a code sent by SMS messages, emails or voice calls less secure than they used to be.
Multi-factor authentication is listed in the Australian Cyber Security Centre’s Essential Eight Maturity Model as a recommended security measure for businesses to reduce their risk of cyber attack.
Last month, in an updated list, authentication via SMS messages, emails or voice calls was downgraded, indicating they’re no longer considered optimal for security.
Here’s what you should do instead.
What is multi-factor authentication?
Whenever we log in to an app or device, we are usually asked for some form of identity check. This is often something we know (like a password), but it can also be something we have (like a security key or an access card) or something we are (like a fingerprint).
The last of these is often preferred because, while you can forget a password or a card, your biometric signature is always with you.
Multi-factor authentication is when more than one identity check is conducted via different channels. For instance, it’s common these days to enter your password, then enter an extra authentication code sent to your phone via SMS message, email or voice mail.
Lots of services, such as banks, already offer this feature. You’re sent a “one-time” code to your phone to confirm you have authorised a transaction.
This is good because:
- it uses two separate channels
- the code is randomly generated, so it can’t be guessed
- the code has a limited lifetime
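The last two properties come from how one-time codes are typically generated: the code is derived from a shared secret and the current time, so it can’t be guessed and it expires after a short window. This is also how authenticator apps compute codes on the device itself, avoiding the SMS channel entirely. Here is a minimal, illustrative (not production-grade) sketch of the standard time-based one-time password (TOTP) algorithm from RFC 6238:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step=30, digits=6):
    """Derive a short-lived numeric code from a shared secret."""
    if t is None:
        t = time.time()
    counter = int(t // step)               # same value for a 30-second window
    msg = struct.pack(">Q", counter)       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Against the RFC 6238 test vectors, `totp(b"12345678901234567890", t=59, digits=8)` yields "94287082". Because both sides compute the code independently from the secret and the clock, nothing guessable ever travels over an insecure channel.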
How could this go wrong?
Suppose a cybercriminal has stolen your phone, but you have it locked via fingerprint. If the criminal wants to compromise your bank account and attempts to log in, your bank sends an authentication code to your phone.
Depending on how your phone settings are configured, the code could pop up on your phone screen even when it’s still locked. The criminal could then input the code and access your bank account. Note that “do not disturb” settings on your phone won’t help, as the message still appears, albeit quietly. To avoid this problem, you need to disable message previews entirely in your phone’s settings.
A more elaborate hack involves “SIM swapping”. If a criminal has some of your identity details, they might be able to convince your phone provider that they are you and request a new SIM attached to your phone number to be sent to them. That way, any time an authentication code is sent from one of your accounts, it will go to the hacker instead of you.
This happened to a technology journalist in the US a couple of years ago, who described the experience:
At about 9pm on Tuesday, August 22 a hacker swapped his or her own SIM card with mine, presumably by calling T-Mobile. This, in turn, shut off network services to my phone and, moments later, allowed the hacker to change most of my Gmail passwords, my Facebook password, and text on my behalf. All of the two-factor notifications went, by default, to my phone number so I received none of them and in about two minutes I was locked out of my digital life.
Then there is the question of whether you want to provide your phone number to the service you are using. Facebook has come under fire in recent days for requiring users to provide their phone number to secure their accounts, but then allowing others to search for their profile via their phone number. They have also reportedly used phone numbers to target users with ads.
This is not to say that splitting identity checks is a bad thing; it’s just that sending part of an identity check via a less secure channel promotes a false sense of security that could be worse than using no security at all.
Multi-factor authentication is important – as long as you do it via the right channels.
Which authentication combinations are best?
Let’s consider some combinations of multi-factor authentication that have varying degrees of ease of use and security.
An obvious first choice is something you know and something you have, say a password and a physical access card. A cybercriminal has to obtain both to impersonate you. Not impossible, but difficult.
Another combination is a password and a voiceprint. A voiceprint recognition system records you speaking a particular passphrase and then matches your voice when you need to authenticate your identity. This is attractive because you can’t leave your voice at home or in the car.
But could your voice be forged? With the aid of digital software, it might be possible to take an existing recording of your voice, unpack and re-sequence it to produce the required phrase. This is somewhat challenging, but not impossible.
A third combination is a card and a voiceprint. This choice removes the need to remember a password, which could be stolen, and as long as you keep the physical token (the card or key) safe, it is very hard for someone else to impersonate you.
There are no perfect solutions yet and using the most secure version of authentication depends on it being offered by the service you are using, such as your bank.
Cyber security is about managing risk, so which combination of multi-factor authentication suits your needs depends on the balance you accept between usability and security.
Until recently, New Zealand’s relationship with China has been easy and at little cost to Wellington. But those days are probably over. New Zealand’s decision to block Huawei from its 5G cellular networks due to security concerns is the first in what could be many hard choices New Zealand will need to make that challenge Wellington’s relationship with Beijing.
For over a decade New Zealand has reaped the benefits of a free-trade agreement with China and seen a boom of Chinese tourists. China is New Zealand’s largest export destination and, apart from concerns about the influence of Chinese capital on the housing market, there have been few negatives for New Zealand.
Long-held fears that New Zealand would eventually have to “choose” between Chinese economic opportunities and American military security had not eventuated.
China’s growing might
During Labour’s government under Helen Clark (1999-2008) and under the National government with John Key as prime minister (2008-2016), New Zealand could be all things to all people, building closer relationships with China while finally calming the last of the lingering American resentment over New Zealand’s anti-nuclear policies. But now, there are difficult decisions to be made.
As China becomes more assertive on the world stage, it is becoming increasingly difficult for New Zealand to keep up this balancing act. Two forces are pushing a more demanding line from Beijing. One is China’s move to assert more control over waters well off its coast.
For decades, Beijing was happy to let the US Navy maintain order over the Western Pacific to facilitate global trade with China. As China’s own economic and military abilities have grown, it has begun to show that it is willing to protect what it sees as its own patch. Its mammoth island building in the South China Sea is a testament to its new-found desire to push its territorial claims after decades of patience.
China’s stronger foreign policy is testing what is known as the “rules-based order” – essentially a set of agreed rules that facilitate diplomacy and global trade, and resolve disputes between nations. This is very concerning for New Zealand, which needs stable rules to allow it to trade with the world. New Zealand doesn’t have the size to bully other countries into giving it what it wants.
Trump-style posturing would get New Zealand nowhere. A more powerful China doesn’t need to threaten the rules-based system, but the transition could create uncertainty for business and higher risks of trade disruption. It is vital for New Zealand that an Asia-Pacific dominated by China is as orderly as one dominated by the US.
Tech made in China
The other force challenging the relationship is China’s emergence as a source of technology rather than simply a manufacturer of other countries’ goods. Many Chinese firms like Huawei are now direct competitors of Western tech companies. Huawei’s success makes it strategically important for Beijing and a point of pride for ordinary Chinese citizens.
Yet, unlike Western countries, China actively monitors its population through a wide variety of mass surveillance technology. There is therefore a trust problem when Chinese firms claim that their devices are secure from Beijing’s spies. New Zealand’s decision to effectively ban Huawei components from 5G cellular networks could be the first of many decisions needed to ensure national security.
Chinese-designed goods are becoming more common, and issues around privacy and national security will only grow as everyday household goods become connected to the internet. Restrictions on Chinese-made goods will further frustrate Beijing and invite greater retaliation against New Zealand exporters and tourism operators.
In more extreme cases, foreign nationals have been detained in China in response to overseas arrests of prominent Chinese individuals. As many as 13 Canadians were detained recently in China following the arrest of Huawei’s CFO Meng Wanzhou in Vancouver at the request of US prosecutors.
Declaring the limits of the relationship
If New Zealand is to maintain a healthy relationship with China, it needs to be clear on what it is not willing to accept. It is easy to say individual privacy, national security and freedom of speech are vital interests of New Zealand, but Wellington needs to be clear to its citizens and to China what exactly those concepts mean in detail. All relationships require compromise, so Wellington needs to be direct about what it won’t compromise.
New Zealand spent decades during the Cold War debating how much public criticism of the US the government could allow itself before it risked its alliance with the Americans. New Zealanders wondered if they really had an independent foreign policy if they couldn’t stand up to their friends. Eventually nationalist sentiment spilled over in the form of the anti-nuclear policy.
New Zealand is now heading for the same debate as Kiwis worry about how much they can push back against Beijing’s interests before it starts to hurt the economy. Now that the relationship with China is beginning to have significant costs as well as benefits, it’s probably time New Zealanders figured out how much they are prepared to pay for an easy trading relationship with China.
Imagine you’ve recently had a heart attack.
You’re a lucky survivor. You’ve received high-quality care from nurses and doctors whilst in hospital and you’re now preparing to go home with the support of your family.
The doctors have made it clear that the situation is grim. It’s a case of: change your lifestyle or die. You’ve got to stop smoking, increase your physical activity, eat a healthy balanced diet (whilst reducing your salt), and make sure you take all your medicine as prescribed.
But before you leave the hospital, the cardiology nurse wants to talk to you. There are a few apps you can download on your smartphone that will help you manage your recovery, including the transition from hospital to home and all the health-related behavioural changes necessary to reduce the risk of another heart attack.
Rapid advancements in digital technologies are revolutionising healthcare. The benefits are numerous, but the rate of development is difficult to keep up with. And that’s creating challenges for both healthcare professionals and patients.
What are digital therapeutics?
Digital therapeutics can be defined as any intervention that is digitally delivered and has a therapeutic effect on a patient. They can be used to treat medical conditions in a similar way to drugs or surgery.
Current examples of digital therapeutics include apps for managing medications and cardiovascular health, apps to support mental health and wellbeing, and augmented and virtual reality tools for patient education.
Paper-based letters, health records, prescription charts and education pamphlets are outdated. We can now send emails, enter information into electronic databases and access electronic medication charts.
And patient education is no longer a static, one-way communication. The digital revolution facilitates dynamic and personalised education, and a two-way interaction between patient and therapist.
How do digital therapeutics help?
Digital health care improves overall quality of care, even in cases where a patient lives hundreds of kilometres away from their doctor.
Take diabetes for example. This condition affects 1.7 million Australians. It’s a major risk factor for developing cardiovascular disease and stroke. So it’s important that people with diabetes manage their condition to reduce their risk.
A recent study evaluated a team-based online game, which was delivered by an app to provide diabetes self-management education. The participants who received the app in this trial had meaningful and sustained improvements in their diabetes, as measured by their HbA1c (blood glucose levels).
App-based games of this kind hold promise for improving chronic disease outcomes at scale.
New electronic devices are also being used by people of all ages to track activity, measure sleep and record nutrition. This information provides instant and accurate feedback to individuals and their therapists, allowing for adjustments where necessary. The logged information can also be combined into large data sets to reveal patterns over time and inform future treatments.
Digital therapeutics are spawning a new language within the healthcare industry. “Connected health” reflects the increasingly digital ways clinicians and patients communicate. A few examples include text messaging, telehealth, and video consultations with health professionals.
There is increasing evidence that digitally delivered care (including apps and text message based interventions) can be good for your health and can help you manage chronic conditions, such as diabetes and cardiovascular disease.
But not all health apps are the same
Whilst the digital health revolution is exciting, results of research studies should be carefully interpreted by patients and providers.
Innovation led to some 325,000 mobile health apps being available in 2017. This raises significant governance issues relating to patient safety (including data protection) when using digital therapeutics.
A recent review identified that most studies have a relatively short duration of intervention and only reflect short-term follow up with participants. The long-term effect of these new therapeutic interventions remains largely unknown.
The current speed of technological development means the usual safety mechanisms face new ethical and regulatory challenges. Who is doing the prescribing? Who is responsible for the efficacy, storage and accuracy of data? How are these technologies being integrated into existing care systems?
Digital health needs a collaborative approach
Digital health presents seismic disruption to patient care, particularly when new technologies are cheap and readily accessible to patients who might lack the insight required to recognise normality or cause for alarm. Technology can be enabling and empowering for self-management. However, there's a lot more that needs to be done to link these new technologies into the current health system.
Take the new Apple Watch functionality of heart rate notifications for example. Research like the Apple Heart Study suggests this exciting innovation could lead to significantly improved detection rates of heart rhythm disorders, and enhanced stroke prevention efforts.
But when a patient receives a high heart rate notification, what should they do? Ignore it? Go to a GP? Head straight to the emergency department? And what is the flow-on impact on the health system?
Many of these questions remain unanswered, suggesting there is an urgent need for research that examines how technology is implemented into existing healthcare systems.
If we are to produce useful digital therapeutics for real-world problems, then it is critical that the end-users are engaged in the process. Patients and healthcare professionals will need to work with software developers to design applications that meet the complex healthcare needs of patients.
Caleb Ferguson, Senior Research Fellow, Western Sydney University; Debra Jackson, Professor, University of Technology Sydney, and Louise Hickman, Associate Professor of Nursing, University of Technology Sydney
We live in a world of wireless communications, from the early days of radio to new digital television, Wi-Fi and the latest 4G (soon to be 5G) connected smart devices.
But there are limits to this wireless world. With the prediction of 12 billion mobile-connected devices by 2021 and a projected sevenfold increase in wireless traffic, the search is on for any new method of wireless connectivity.
One solution could be right before our very eyes, if only we could see it.
Current wireless connections
All wireless applications – such as mobile communications, Wi-Fi, broadcasting, and sensing – rely on some form of electromagnetic radiation.
The difference between these applications is simply the frequency of the signal (the carrier frequency) used in the electromagnetic radiation.
For example, current mobile phones sold as 3G and 4G operate in the lower microwave frequency bands (850MHz, 1.8GHz, 2-2.5GHz). A wireless local area network such as Wi-Fi operates in the 2.4GHz and 5GHz bands, whereas digital terrestrial television operates at 600-620MHz.
The spectrum of electromagnetic radiation covers a very broad range of frequencies and some of these are selected for specific applications.
These frequency regions are highly contested and valuable resources for wireless applications.
Running out of spectrum
Our current spectrum use in the lower microwave region will soon be heavily congested, even exhausted. It would be difficult to squeeze out any more spare spectrum for any wireless application.
To carry information on one of these frequencies, a frequency band needs sufficient bandwidth – which determines the amount of information that can be transmitted – to meet future requirements. At the lower end of the spectrum, there is not enough bandwidth to support speeds exceeding gigabits per second.
At the higher end of the spectrum, ionising radiation such as x-rays and gamma rays cannot be used because of safety issues.
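The link between bandwidth and achievable speed can be illustrated with Shannon's classic capacity formula, C = B log₂(1 + SNR). The sketch below compares a narrow microwave-style channel with a much wider slice of spectrum; the channel widths and signal-to-noise ratio are illustrative assumptions, not figures from any particular system.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon channel capacity: C = B * log2(1 + SNR), in bits per second."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative comparison (assumed figures): a 20 MHz Wi-Fi-style channel
# versus a 10 GHz slice of spectrum, both at an SNR of 100 (20 dB).
microwave = shannon_capacity_bps(20e6, 100)   # roughly 133 Mb/s
wideband = shannon_capacity_bps(10e9, 100)    # roughly 66.6 Gb/s

print(f"20 MHz channel: {microwave / 1e6:.0f} Mb/s")
print(f"10 GHz channel: {wideband / 1e9:.1f} Gb/s")
```

At a fixed signal quality, capacity scales directly with bandwidth – which is why the far wider bands available at optical frequencies are so attractive.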
Despite the current 4G wireless standard promising more shared capacity (1Gb/s), projected demand and traffic volumes are already pushing the existing infrastructure to its limit. The future promise of 5G communication only adds to the problem.
A major rethink of the current wireless technologies is needed to meet these challenging requirements.
Let there be light!
The wireless transmission of optical signals has emerged as a viable option. It offers advantages not possible with current wireless technologies.
Optical wireless promises greater speed, higher throughput, and potentially lower energy consumption. Leveraging existing optical wired infrastructure (namely optical fibre cables and networks), optical wireless connectivity can provide seamless high capacity to end-users.
An example would be using optical wireless connectivity inside buildings to complement fibre-to-the-home deployments.
Optical wireless networks would be immune to electromagnetic interference and so could be deployed in radio frequency (RF) sensitive environments. You’ve probably seen those warning signs asking you not to use your mobile phone in hospitals, aircraft and other areas where equipment is sensitive to interference.
Optical wireless communications can be divided into visible light and infrared systems.
And let there be sight
A common issue with both is that devices need to be in the line of sight, as any physical obstruction can result in the loss of transmission. You may have experienced this issue when attempting to change a channel on TV if someone or something gets in the way of your remote.
Visible light communication (VLC) relies on LEDs that are also used for lighting. For example, by flashing LED lights located in the ceiling of a room at a rate much higher than can be discerned by the human eye, information can be conveyed to detectors around the room.
The major limitation of VLC is the modest bandwidth of commercially available white LEDs (around 100 MHz), which caps transmission speeds.
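The flashing-LED scheme described above is, at its simplest, on-off keying: the light being on represents a 1, off represents a 0. The toy sketch below encodes and decodes bytes this way; real VLC systems use far more sophisticated modulation to squeeze more data through the LED's limited bandwidth, so treat this as an illustration of the principle only.

```python
def ook_modulate(data: bytes) -> list[int]:
    """Encode bytes as on-off keying: LED on = 1, LED off = 0 (toy model)."""
    bits = []
    for byte in data:
        for i in range(7, -1, -1):   # most significant bit first
            bits.append((byte >> i) & 1)
    return bits

def ook_demodulate(bits: list[int]) -> bytes:
    """Recover bytes from the sequence of detected light levels."""
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)

# A short message survives the round trip through the "light channel".
signal = ook_modulate(b"Hi")
assert ook_demodulate(signal) == b"Hi"
```

Flashed faster than the eye can follow, the room's lighting doubles as a data link while still looking like steady illumination.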
Infrared communication systems have ample bandwidth, with the potential to transmit tens of Gb/s per user. Despite this major advantage over VLC, the need for line of sight has left the technology under-developed. Until now.
To overcome this we have demonstrated an infrared-based optical wireless communication link that can support a user on the move. By using a pair of access points with some spatial separation, any blockage of beams can be easily overcome as users hop from beam to beam freely.
Optical wireless systems can be built to make sure there is a secure wireless transmission. Using efficient wireless protocols it’s possible to transmit data without any delay and to allow users to move within a building while enjoying high speed wireless coverage.
Optical wireless in action
We will in future be using a range of devices, such as virtual reality (VR) and augmented reality (AR) devices, that all require superfast wireless connections.
For example, these new user interfaces are poised to make a big difference to the way museums and galleries will operate in the future. Currently, most of these platforms are linked via wired connections. But wireless interfaces will make them much easier to use in such applications.
The uptake of optical wireless as a viable communications technology could also open the possibility of using low-cost optical wireless transceivers as a substitute for expensive optical fibre rollout in rural and regional broadband contexts.
The integrated transceivers for infrared optical wireless communications are still under development, and more effort is needed to speed up such integration. But research teams here and abroad are working to advance the way such systems can be used in realistic scenarios.
Thas Ampalavanapillai Nirmalathas, Director – Networked Society Institute and Professor of Electrical and Electronic Engineering, University of Melbourne; Christina Lim, Professor, University of Melbourne, and Elaine Wong, Associate Dean, Diversity and Inclusion, University of Melbourne
Big tech is under fire in Europe. In its latest sting, the European Commission has slapped Google with an eye-watering €4.3 billion (AU$6.8 billion) fine for anti-competitive tying of its Android operating system to its in-house search engine and web browser.
The decision follows the Commission’s €2.4 billion (AU$3.5 billion) fine against the company for giving illegal advantage to its comparison shopping service, just over a year ago.
And the search company is not alone in feeling the heat from Brussels. Apple, Amazon, Facebook and Microsoft have all been on the receiving end of what some see as a “techlash” reflecting anti-US bias and protectionism.
So far, US competition authorities have taken a far more restrained approach. The Federal Trade Commission looked into various Google practices in 2012 and found it had no case to raise around search bias. Antitrust officials are instead encouraging vigilance but caution when it comes to intervening in data-driven markets characterised by high rates of innovation.
Closer to home, the Australian Competition and Consumer Commission (ACCC) is conducting an inquiry into the impact of digital platforms on media and advertising markets. It is attracting intense interest, not just here but abroad. There are also reports of the ACCC separately investigating Google’s data-harvesting practices.
Will the ACCC follow the US or European approach?
It is tempting to speculate about the outcomes of the inquiry in those terms, but to do so would be a mistake.
There are differences in the substantive laws across jurisdictions. The Australian rule on misuse of market power, for example, is not an exact replica of either its US or EU counterpart.
What’s more, the law in any country can only be understood by considering its ideological roots, the political and socio-cultural conditions in which it is shaped, and the institutional framework that determines its application. In other words, history and context matter.
Divergence between the US and European Union on how to deal with large powerful companies is nothing new and, in context, not all that surprising.
In the US competition is about consumer welfare
US antitrust laws were introduced in response to the economic and political power of “big business” and what was seen as a need to protect the “little guy” from a few “robber barons”.
However, since the 1970s, under the influence of the Chicago school, commitment to economic efficiency in the interests of consumer welfare has become the singular goal of antitrust.
In practice, this has meant agencies and courts have stressed a ground rule of trusting markets to self-correct and erring on the side of false negatives rather than false positives. Where there is intervention, it is to protect competition and not competitors.
In recent times, there has been growing dissatisfaction in some circles about the levels of concentration in the US economy and the role that permissive antitrust has played in creating so-called “data-opolies”.
Nevertheless, Chicagoan themes continue to underpin self-restraint on the part of US antitrust agencies, including when it comes to big tech.
In Europe fairness counts too
While EU competition laws are also concerned with the consumer, they are more pluralistic in their approach. This reflects Europe's experience in the aftermath of the second world war, and the single market project.
EU-style antitrust has therefore always been based on – and continues to reflect – more normative values than the US, protecting ideas like economic freedom and fairness.
Fairness in this context, however, is not necessarily about protecting the losers from a legitimate competitive process. It is about protecting the right to equal opportunities for efficient competitors, or merit-based competition on a level playing field.
It is also about ensuring fairness for consumers. Anti-competitive conduct, the European competition boss argues, is unfair because it deprives consumers of the power to arbitrate the marketplace.
Australian competition law has its own flavour
Born in the late 1970s, the modern version of Australian competition law has followed the Chicagoan song sheet, favouring economic efficiency for consumer welfare as its primary purpose.
However, in a relatively small economy, marked by oligopolistic structures and high concentration in key sectors, Australia has always struggled with a balancing act between promoting efficiency and protecting small business.
“Fair competition” (a version of the iconic “fair go”) is a phrase often heard in Australian competition law dialect. But it is not to be confused with propping up inefficient rivals at the expense of competition.
Unlike in many other countries, Australia’s competition rules live within a statute that also has rules to deal separately with ensuring small businesses and consumers are treated fairly.
Under the Competition and Consumer Act, the competition, fair trading and consumer protection provisions are mutually reinforcing. Also unlike in either the US or EU, these provisions are enforced by a single agency, the ACCC.
Distinctive too is that the ACCC is an agency with substantial regulatory responsibilities in areas including communications and infrastructure. These may be relevant in a debate about whether powerful tech companies should be regulated like public utilities on the grounds that they provide services that are essential to consumers.
The ACCC’s inquiry will be holistic
Given this legislative and institutional framework, the ACCC’s take on big tech is likely to be a mix of US and EU approaches with more than a dash of homemade seasoning.
It will consider if platforms have market power, in which markets, and how such power is being exercised. Implications for the price, quality and choice of news for consumers will loom large.
It will consider what impact the proposed consumer right to data may have.
It will also examine whether platforms are providing users with adequate levels of privacy and data protection.
The ACCC will look into whether the large platforms are behaving in a way that dampens innovation and investment incentives for start-ups and smaller players.
There will be consideration of whether platforms have an unfair advantage because the regulatory playing field is not even. Regulation of journalistic content and copyright will also come into play.
Most importantly, the inquiry will not be static in its focus. It will have a firm eye on potential long term trends in and impacts of technological change within an Australian context.
Within a broad holistic framework, the ACCC will examine these questions in an integrated way. And it will take its time to "get the answers right".
A version of this article appears on Pursuit. Professor Caron Beaton-Wells will launch a podcast on 26 July about Competition Lore, focusing on the challenges of competition in a digital age. Listen to ACCC Chairman Rod Sims discuss the Digital Platforms Inquiry in episode three of the podcast.