Australia must engage with nuclear research or fall far behind



Nuclear power will likely remain part of the global energy mix.
ioshimuro/Flickr, CC BY-NC

Heiko Timmers, UNSW

Much is made of the “next generation” of nuclear reactors in the debate over nuclear power in Australia. They are touted as safer than older reactors, and suitable for helping Australia move away from fossil fuels.

But much of the evidence given in September to a federal inquiry shows the economics of nuclear in Australia cannot presently compete with booming renewable electricity generation.




Read more:
Nuclear becomes latest round in energy wars


However, international projections predict nuclear power will stick around beyond 2040. It is forecast to reduce the carbon footprint of other nations, in many cases fuelled by our uranium.

To choose wisely on nuclear power options in future, we ought to stay engaged. Renewables in combination with hydro storage might fail to fully decarbonise the electricity sector, or much more electricity may be needed in future for desalination, emission-free manufacturing, or hydrogen fuel to deal with an escalating climate crisis. Nuclear power might be advantageous then.

What reactors will be available in future?

Recently commissioned nuclear power stations, such as the Korean APR-1400 reactors in the United Arab Emirates and China's Hualong One design, are all large Generation III light water reactors that produce gigawatts of electricity. Discouraged by the cost blowouts and long delays that have plagued such projects in England and Finland, Australia is unlikely to consider building Generation III reactors.

The company NuScale in particular promotes a new approach to nuclear power, based on smaller modular reactors that might eventually be prefabricated and shipped to site. Although marketed as “next generation”, this technology has been used in maritime applications for many years. It might be a good choice for Australian submarines.




Read more:
Is nuclear power zero-emission? No, but it isn’t high-emission either


NuScale has licensed its design in the United States and might be able to demonstrate the first such reactor in 2027 in a research laboratory in Idaho.

These small reactors each produce 60 megawatts of power and require a much smaller initial investment than traditional nuclear power stations. They are also safer, as the entire reactor vessel sits in a large pool of water, so no active cooling is needed once the reactor is switched off.

However, the technical, operational and economic feasibility of making and maintaining modular reactors is completely untested.

Looking ahead: Generation IV reactors and thorium

If Australia decided to build a nuclear power station, it would take decades to complete. So we might also choose one of several other new reactor concepts, labelled Generation IV. Some of those designs are expected to become technology-ready after 2030.

Generation IV reactors can be divided into thermal reactors and fast breeders.

Thermal reactors

Thermal reactors are quite similar to conventional Generation III light water reactors.

However, some will use molten salts or helium gas as coolant instead of water, which makes hydrogen explosions – as occurred at Fukushima – impossible.

Some of these new reactor designs can operate at higher temperatures and over a larger temperature range without having to sustain the extreme pressures necessary in conventional designs. This improves efficiency and safety.

Fast breeders

Fast breeder reactors require fuel that contains more fissile uranium, and they can also create plutonium. This plutonium might eventually support a sustainable nuclear fuel cycle. They also use the uranium fuel more efficiently, and generate less radioactive waste.

However, the enriched fuel and capacity to produce plutonium means that fast breeders are more closely linked to nuclear weapons. Fast reactors thus do not fit well with Australia’s international and strategic outlook.

Breeding fuel from thorium

An alternative to using conventional uranium fuel is thorium, which is far less useful for nuclear weapons. Thorium can be converted in a nuclear reactor to a different type of uranium fuel (U-233).
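For readers curious about the physics, the conversion happens through a well-established breeding chain: thorium-232 captures a neutron and then undergoes two beta decays to become fissile uranium-233. The sketch below uses standard nuclear data (the half-lives are not from the article and are approximate).

```latex
% Thorium breeding chain (standard nuclear data; half-lives approximate)
{}^{232}\mathrm{Th} + n \longrightarrow {}^{233}\mathrm{Th}
  \xrightarrow{\beta^-,\ t_{1/2}\approx 22\ \text{min}} {}^{233}\mathrm{Pa}
  \xrightarrow{\beta^-,\ t_{1/2}\approx 27\ \text{days}} {}^{233}\mathrm{U}
```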

The idea of using this for nuclear power was raised as early as 1950, but development in the US largely ceased in the 1970s. Breeding fuel from thorium could in principle be sustained for thousands of years. Plenty of thorium is already available in mining tailings.

Thorium reactors have not been pursued because the conventional uranium fuel cycle is so well established. The separation of U-233 from the thorium has therefore not been demonstrated in a commercial setting.

India is working on establishing a thorium fuel cycle due to its lack of domestic uranium deposits, and China is developing a thorium research reactor.

Australia’s perspective

To choose wisely on nuclear power and the right technology in future, we can stay engaged by:

  • realising a much-needed national facility to store waste from our nuclear medicine
  • making our uranium exports competitive again
  • driving the navy’s submarines with nuclear power, and
  • possibly reconsidering the business case for a commercial spent fuel repository.

Australia has already joined the Generation IV International Forum, a good first step to foster cooperation on nuclear technology research and stay in touch with reactor developments.




Read more:
Australia should explore nuclear waste before we try domestic nuclear power


Australia could deepen such research involvement by, for example, developing engineering expertise on thermal Generation IV reactors here. Such forward-looking engagement with nuclear power might provide a structured pathway towards the commercial use of nuclear power later, if it is indeed needed.

Heiko Timmers, Associate Professor of Physics, School of Science, UNSW Canberra, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The ACCC is suing Google over tracking users. Here’s why it matters



The ACCC has been highly critical of how many large digital platforms use data.
Shutterstock

Katharine Kemp, UNSW

The Australian Competition and Consumer Commission (ACCC) today announced it is suing Google for misleading consumers about its collection and use of personal location data.

The case is the consumer watchdog’s first move against a major digital platform following the publication of the Digital Platforms Inquiry Final Report in July.

The ACCC follows regulators in countries including the US and Germany in taking action against the way “tech giants” such as Google and Facebook harvest and exploit their users’ data.

What did Google do?

ACCC Chair Rod Sims said Google “collected, kept and used highly sensitive and valuable personal information about consumers’ location without them making an informed choice”.

The ACCC alleges that Google breached the Australian Consumer Law (ACL) by misleading its users in the course of 2017 and 2018, including by:

  • not properly disclosing that two different settings needed to be switched off if consumers did not want Google to collect, keep and use their location data

  • not disclosing on those pages that personal location data could be used for a number of purposes unrelated to the consumer’s use of Google services.

Some of the alleged breaches can carry penalties of up to A$10 million or 10% of annual turnover.

A spokesperson for Google is reported to have said the company is reviewing the allegations and engaging with the ACCC.

The two separate settings that users needed to change to disable location tracking.
Android screenshots, Author provided

Turning off “Location History” did not turn off location history

According to the ACCC, Google’s account settings on Android phones and tablets would have led consumers to think changing a setting on the “Location History” page would stop Google from collecting, keeping and using their location data.

The ACCC says Google failed to make clear to consumers that they would actually need to change their choices on a separate setting titled “Web & App Activity” to prevent this location tracking.

Location data is used for much more than Google Maps

Google collects and uses consumers’ personal location data for purposes other than providing Google services to consumers. For example, Google uses location data to work out demographic information, target advertising, and offer advertising services to other businesses.

Digital platforms increasingly track consumers online and offline to create highly detailed personal profiles on each of us. These profiles are then used to sell advertising services. These data practices create risks of criminal data breaches, discrimination, exclusion and manipulation.




Read more:
Here’s how tech giants profit from invading our privacy, and how we can start taking it back


Concealed data practices under fire around the world

The ACCC joins a number of other regulators and consumer organisations taking aim at the concealed data practices of the “tech giants”.

In 2018, the Norwegian Consumer Council published a report – Deceived by Design – which analysed a sample of Google, Facebook and Microsoft Windows privacy settings. The conclusion: “service providers employ numerous tactics in order to nudge or push consumers toward sharing as much data as possible”.

The report said some aspects of privacy policies can be seen as “dark patterns”, or “features of interface design crafted to trick users into doing things that they might not want to do”.

In Canada, an investigation by the Office of the Privacy Commissioner of Canada into how Facebook gets consent for certain data practices was highly critical.

It found that the relevant data use policy “contained blanket statements referencing potential disclosures of a broad range of personal information, to a broad range of individuals or organisations, for a broad range of purposes”. The result was that Facebook users “had no way of truly knowing what personal information would be disclosed to which app and for what purposes”.

Is Facebook next?

The ACCC was highly critical of the data practices of a number of large digital platforms when the Final Report of the Digital Platforms Inquiry was published in July this year. The platforms included Facebook, WhatsApp, Twitter and Google.

The report was particularly scathing about privacy policies which were long, complex, difficult to navigate and low on real choices for consumers. In its words, certain common features of digital platforms’ consent processes:

leverage digital platforms’ bargaining power and deepen information asymmetries, preventing consumers from providing meaningful consents to digital platforms’ collection, use and disclosure of their user data.

The report also stated the ACCC was investigating whether various representations by Google and Facebook respectively would “raise issues under the ACL”.

The investigations concerning Facebook related to representations concerning its sharing of user data with third parties and potential unfair contract terms. So far no proceedings against Facebook have been announced.




Read more:
94% of Australians do not read all privacy policies that apply to them – and that’s rational behaviour


Will this change anything?

While penalties of up to A$10 million or 10% of annual turnover (in Australia) may sound significant, last year Google made US$116 billion in advertising revenue globally.

In July, the US Federal Trade Commission settled with Facebook on a US$5 billion fine for repeatedly misleading users about the fact personal information could be accessed by third-party apps without the user’s consent, if a user’s Facebook “friend” gave consent. Facebook’s share price went up after the FTC approved the settlement.

But this does not mean the ACCC’s proceedings against Google are a pointless exercise. Aside from the impact on Google’s reputation, these proceedings may highlight for consumers the difference between platforms which have incentives to hide data practices from consumers and other platforms – like the search engine DuckDuckGo – which offer privacy-respecting alternatives.

Katharine Kemp, Senior Lecturer, Faculty of Law, UNSW, and Co-Leader, ‘Data as a Source of Market Power’ Research Stream of The Allens Hub for Technology, Law and Innovation, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Telstra’s new high-tech payphones are meeting resistance from councils, but why?



Telstra’s new digital advertising payphones can be found at Melbourne’s Bourke Street Mall. In this photo, the older centre booth sits between two of Telstra’s larger high-tech booths.
City of Melbourne

Mark A Gregory, RMIT University

Australia is witnessing the first major redesign of the payphone booth since 1983. But Telstra’s new vision is meeting resistance from some councils, and the matter is in the courts.

In an effort to make payphones relevant to the needs of modern Australians, Telstra’s revamped payphones feature mobile charging, Wi-Fi access through Telstra Air (free or via a Telstra broadband plan, depending on the area), and large digital advertising displays.

Sydney’s Lord Mayor Clover Moore described the new booths as “a craven attempt” to profit from “already crowded CBD footpaths”, and a “Trojan horse for advertising”.




Read more:
Will Australia’s digital divide – fast for the city, slow in the country – ever be bridged?


Under existing Universal Service Obligation (USO) agreements, Telstra has to provide payphones as part of its standard telephone service. The USO is a consumer protection measure that ensures everyone has access to landline telephones and payphones, regardless of where they live or work. Telstra is the sole provider of USO services in Australia.

The USO is funded through an industry levy administered by the Australian Communications and Media Authority. This means registered carriers with revenues over A$25 million per year contribute to the levy, including Telstra.

The face of the new Aussie payphone

In a blog post last March, a Telstra employee said the new “smart payphones” provided emergency alerts, multilingual services, and content services including public transport information, city maps, weather, tourist advice, and information on cultural attractions.

The booths are 2.64m tall, 1.09m wide, and are fitted with 75-inch LCD screens on one side. In 2016, 40 payphones were approved by City of Melbourne planners and installed over the following year, marking the start of Telstra’s plans for a nationwide rollout.

Telstra’s submission to the city claimed the booths were “low-impact” infrastructure and therefore planning approval was not required, in accordance with the Telecommunications Act 1997 (Cth).

In 2017, Telstra and outdoor advertising company JCDecaux announced a partnership to “bring the phone box into the 21st century”.




Read more:
Better public Wi-Fi in Australia? Let’s send a signal


The partnership would initially see 1,860 payphones upgraded in Sydney, Melbourne, Brisbane, Adelaide and Perth. These five cities represent 64% of the country’s population and 77% of advertising spend.

Taking matters to court

Earlier this year, Telstra’s application for 81 new booths was blocked by the City of Melbourne, and the city commenced proceedings in the Victorian Civil and Administrative Tribunal to have the booths redefined as not being low-impact.

Given the council allowed 40 booths to be installed in 2017, it’s unclear why its position has since changed.

In May, Telstra hit back by starting federal court proceedings against the council in an effort to overturn prior proceedings. In June, the Brisbane and Sydney city councils joined the City of Melbourne as co-respondents.

Melbourne Councillor and Chair of Planning Nicholas Reece said the new payphones would create congestion on busy footpaths, describing them as “monstrous electric billboards masquerading as payphones”.

He said the booths were “part of a revenue strategy for Telstra”.

But Telstra claims the new payphones are only 15cm wider than previous ones. A company spokesperson said the extra size was necessary to accommodate fibre connections and other equipment needed to operate the booth’s services.

Who pays for, and profits from, payphones?

In 2017, a Productivity Commission inquiry into the USO reported that payphones each receive an annual subsidy of between A$2,600 and A$50,000, funded through the industry levy.

But the levy doesn’t cover the cost of installing and providing advertising on booths. Also, Telstra’s advertising-generated revenue doesn’t directly offset the cost of installing and operating the payphones.

Telstra has advertised on its payphones for the past 30 years. But display screens for advertising on new booths are 60% larger than previous ones.

The City of Melbourne is concerned because research it commissioned from SGS Economics and Planning estimates the new booths would reduce pedestrian flow by 10%, as people stop to look at the payphone advertising, costing the city A$2.1 billion in lost productivity.




Read more:
Optus’ apology on coverage highlights multi-network problem


That said, federal legislation doesn’t prevent Telstra from placing advertising on payphones. So the existing court case could hinge on Melbourne city council’s argument that by increasing the size of digital displays, Telstra’s new payphones are no longer low-impact.

The outcome should be known early next year.

Do we still need payphones?

At a time when consumers and businesses use about 24.3 million mobile handsets, it’s reasonable to question whether payphones are still required.

The number of payphones in operation today is sharply down compared with the payphone’s heyday in the early 1990s, when more than 80,000 could be found across Australia.

But there’s strong evidence they continue to supply a vital public service.

Telstra’s payphones operate in many small regional communities such as Woomera, South Australia. It has a population of less than 200 people.
georgiesharp/flickr

Currently, Telstra provides more than 16,000 public payphones. Last year, these were used to make about 13 million phone calls, of which about 200,000 were emergency calls to 000.

So regardless of the verdict on the Telstra case, the public payphone is and will continue to be an iconic and integral part of our telecommunications landscape.

Mark A Gregory, Associate professor, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Nuclear powers once shared their technology openly – how Iran’s programme fell on the wrong side of history



Iran’s nuclear deal is hanging in the balance.
By Stuart Miles/Shutterstock

Joseph O’Mahoney, University of Reading

As tensions remain at fever pitch between Tehran and Washington, Iran continues to breach limits agreed in the 2015 Iran deal, known as the Joint Comprehensive Plan of Action (JCPOA). Since Donald Trump withdrew the US from the deal in May 2018, its future has become more and more uncertain.

Iran has enriched uranium beyond the level agreed in the deal. A UN report published in late September confirmed reports that the Iranian nuclear agency had begun operating more advanced centrifuges, the machines used to enrich uranium.

The Western media report these events by relating them to the materials needed for a nuclear weapon. Yet Iran insists that it “will never pursue a nuclear weapon” and that all of its activities are necessary for civilian nuclear power.

The difference between peaceful civilian nuclear energy programmes and the military production of nuclear weapons seems an obvious distinction. And yet there have been major shifts in policy regarding this line since explosive nuclear fission was first achieved in the US in 1945. These are partly rooted in a simple and relatively uncontested principle: there is no strict technical difference between the fissile material used in a civilian nuclear power reactor and that used in a nuclear weapon.

The same technology is used to enrich uranium for either nuclear power or nuclear weapons. This principle has been implemented in a variety of ways throughout the nuclear age.

Post-war restrictions

After World War II, the US initially restricted access to “all data concerning the manufacture or use of atomic weapons, the production of fissionable material, or the use of fissionable material in the production of power” in the Atomic Energy Act of 1946.

This meant strict restrictions were put in place on exchanging information on nuclear technology, even between the US and the UK, otherwise close allies. The policy was based on the principle that there was little practical difference between the knowledge necessary to build nuclear reactors and that needed to produce “atomic” weapons, such as those the Americans dropped on Hiroshima and Nagasaki in 1945.

US President Harry Truman strongly believed that by restricting access to its information about all nuclear technology, the US could maintain a technical barrier to the production of nuclear weapons elsewhere in the world. He was so convinced of this that he rejected the scientific consensus that the Soviet Union had tested a bomb in 1949, saying:

I am not convinced the Russians have achieved the know-how to put the complicated mechanism together to make an A-Bomb work, I am not convinced they have the bomb.

However, the Soviet Union had indeed tested a device, albeit with the help of some technical espionage. Then, despite the restrictions on information sharing, the UK tested a weapon in 1952. By the end of 1953, the Soviet Union had followed the US in successfully exploding a thermonuclear weapon.

Atoms for Peace

The initial hopes for a US technical monopoly were dashed. Reversing previous policy, US President Dwight Eisenhower told the UN in 1953 that “the dread secret and the fearful engines of atomic might are not ours alone”. As part of the Atoms for Peace programme, the US spearheaded the creation of the International Atomic Energy Agency to help apply atomic energy to other areas of life such as agriculture and medicine, as well as the production of energy.

Over the next 20 years, much previously secret information and technology was shared around the world. For example, the US provided research reactors and enriched uranium to a wide variety of countries including Iran. There was a general tone of optimism over the future of civilian nuclear energy.

This laissez-faire attitude towards the “peaceful uses” of nuclear technology was even enshrined in Article IV of the 1970 Treaty on the Non-Proliferation of Nuclear Weapons, which stated an “inalienable right” to “research, production and use of nuclear energy for peaceful purposes” and “the fullest possible exchange of equipment, materials and scientific and technological information for the peaceful uses of nuclear energy”.

The backlash

This distinction between the civilian and military uses of nuclear technology started to unravel after India exploded a nuclear device in May 1974. This device, reportedly nicknamed the Smiling Buddha, was technically developed outside the agreements India had with Canada, which supplied its nuclear reactors, and the US, which supplied the heavy water needed to sustain a nuclear reaction in those reactors.

However, this outside nuclear assistance was crucial to the nuclear weapons programme. As Homi Sethna, chairman of the Indian Atomic Energy Commission between 1972 and 1983, later wrote: “The initial (nuclear) cooperation agreement itself has been the bedrock on which our nuclear programme has been built.”

Despite India’s attempts to brand the test a “peaceful nuclear explosion”, it set off a flurry of concern within the US as well as in other supplier countries including the Soviet Union, UK, and Canada. The worry was that if India, not seen as a developed nation, could produce a nuclear explosion on the back of nominally civilian nuclear assistance, so could others. The worldwide energy crisis of the early 1970s had led many countries to pursue nuclear energy and the Indian example made the growing spread of nuclear technology appear an ominous and menacing development.

The tenor of the debate had changed. For example, in 1975 a deal was announced in which West Germany agreed to provide Brazil with a complete civilian nuclear fuel cycle, potentially including the abilities to enrich uranium and reprocess plutonium. The New York Times called this “nuclear madness”.

A general reorientation in nuclear policy began. Since then, the nuclear story has generally been one of viewing civilian nuclear programmes as a pathway towards a military nuclear weapons capability. Restrictions on the transfer of and access to nuclear materials and technology have increased. The US and other suppliers, for example, began to co-operate in the Nuclear Suppliers Group to restrict access to nuclear technology, such as by refusing to export the technology for producing plutonium or enriching uranium.

Yet, academic research shows that the technical capability to enrich uranium is within reach of nearly all states. Although civilian power programmes increase the technical capacity of a state to build nuclear weapons, they have important countervailing political effects that limit the odds of the proliferation of nuclear weapons.

Despite this, public and media opinion seems tilted towards the view that the US or other supplier states can control the development of nuclear weapons through technical constraints. As such, it seems unlikely that the set of circumstances that produced the JCPOA in the first place – a deal built around restricting Iran’s nuclear capabilities – will happen again.

Joseph O’Mahoney, Lecturer in Politics and International Relations, University of Reading

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Space can solve our looming resource crisis – but the space industry itself must be sustainable


Richard Matthews, University of Adelaide

Australia’s space industry is set to grow into a multibillion-dollar sector that could provide tens of thousands of jobs and help replenish the dwindling stocks of precious resources on Earth. But to make sure they don’t flame out prematurely, space companies need to learn some key lessons about sustainability.

Sustainability is often defined as meeting the needs of the present without compromising the ability of future generations to meet their own needs. Often this definition is linked to the economic need for growth. In our context, we link it to the social and material needs of our communities.

We cannot grow without limit. In 1972, the influential report The Limits to Growth argued that if society’s growth continued at projected rates, humans would experience a “sudden and uncontrollable decline in both population and industrial capacity” by 2070. Recent research from the University of Melbourne’s sustainability institute updated and reinforced these conclusions.

Our insatiable hunger for resources increases as we continue to strive to improve our way of life. But how does our resource use relate to the space industry?




Read more:
Dig deep: Australia’s mining know-how makes it the perfect $150m partner for NASA’s Moon and Mars shots


There are two ways we could try to avert this forecast collapse: we could change our behaviour from consumption to conservation, or we could find new sources to replenish our stocks of non-renewable resources. Space presents an opportunity to do the latter.

Asteroids provide an almost limitless opportunity to mine valuable metals such as gold, cobalt, nickel and platinum, as well as the resources required for the future exploration of our solar system, such as water ice. Water ice is crucial to our further exploration efforts as it can be refined into liquid water, oxygen, and rocket fuel.

But for future space missions to top up our dwindling resources on Earth, our space industries themselves must be sustainable. That means building a sustainable culture in these industries as they grow.

How do we measure sustainability?

Triple bottom-line accounting is one of the most common ways to assess the sustainability of a company, based on three crucial areas of impact: social, environmental, and financial. A combined framework can be used to measure performance in these areas.

In 2006, UTS sustainable business researcher Suzanne Benn and her colleagues introduced a method for assessing the corporate sustainability of an organisation in the social and environmental areas. This work was extended in 2014 by her colleague Bruce Perrott to include the financial dimension.

This model allows an organisation to be assessed at one of six levels of sustainability. The six stages, in order, are: rejection, non-responsiveness, compliance, efficiency, strategic proactivity, and the sustaining corporation.
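To make the scale concrete, here is a minimal sketch of the six stages as an ordered enumeration, with a company benchmarked at one level. This is an illustration added here, not code from the cited research; only the stage names come from the article.

```python
# A minimal sketch of the six-stage corporate sustainability scale described above.
# Stage names are from the article; the code structure is illustrative only.
from enum import IntEnum

class SustainabilityStage(IntEnum):
    REJECTION = 1
    NON_RESPONSIVENESS = 2
    COMPLIANCE = 3
    EFFICIENCY = 4
    STRATEGIC_PROACTIVITY = 5
    SUSTAINING_CORPORATION = 6

# Example benchmark: a company assessed as "compliant" sits at level 3 of 6.
benchmark = SustainabilityStage.COMPLIANCE
print(f"{benchmark.name.title()}: level {benchmark.value} of {len(SustainabilityStage)}")
```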

Sustainability benchmarking the space industry

In my research, which I presented this week at the Australian Space Research Conference in Adelaide, I used these models to assess the sustainability of the American space company SpaceX.

Using freely available information about SpaceX, I benchmarked the company as compliant (level 3 of 6) within the sustainability framework.

While SpaceX has been innovative in designing ways to travel into space, this innovation has not been for environmental reasons. Instead, the company is focused on bringing down the cost of launches.

SpaceX also relies heavily on government contracts. Several analysts have questioned its profitability, with capital raised through loans and the sale of future tickets in the burgeoning space tourism industry. Such sales might look like revenue generation, but accountants would classify them as a liability.

The reliance on forward sales is a growing concern for the industry, with other space tourism companies such as Virgin Galactic failing to secure growth. It has been reported that Virgin Galactic will run out of customers by 2023 due to the high costs associated with space travel.




Read more:
NASA and space tourists might be in our future but first we need to decide who can launch from Australia


SpaceX’s culture also rates poorly for sustainability. As at many startups, employees at SpaceX are known to work more than 80 hours a week without taking their mandatory breaks. This problem was the subject of a lawsuit settled in 2017. Such behaviour contravenes Goal 8 of the UN Sustainable Development Goals, which seeks to achieve “decent work for all”.

What’s next?

Australia is in a unique position. As the newest player in the global space industry, the investment opportunity is big. The federal government predicts that by 2030, the space sector could be a A$12 billion industry employing 20,000 people.

Presentations at the Australian Space Research Conference by the Australian Space Agency made one thing clear: regulation is coming. We can use this to gain a competitive edge.




Read more:
From tourism to terrorists, fast-moving space industries create new ethical challenges


By embedding sustainability principles into emerging space startups, we can avoid the economic cost of having to correct bad behaviours later.

We will gain the first-mover advantage on implementing these principles, which will in turn increase investor confidence and improve company valuations.

To ensure that the space sector can last long enough to provide real benefits for Australia and the world, its defining principle must be sustainability.

Richard Matthews, Research Associate | Councillor, University of Adelaide

This article is republished from The Conversation under a Creative Commons license. Read the original article.

‘Transformer’ rooms and robo-furniture are set to remake our homes – and lives – before our eyes



The Ori ‘Cloud Bed’ is lifted and lowered from a ceiling recess to create space that doubles as bedroom and living room.
Ori/YouTube (screengrab)

Christian Tietz, UNSW

With two-thirds of a global population of 9.4 billion people expected to live in urban areas by 2050, we can anticipate a change in the domestic living arrangements we are familiar with today.

In high-density cities, the static apartment layouts with one function per room will become a luxury that cannot be maintained. The traditional notion of a dedicated living room, bedroom, bathroom or kitchen will no longer be economically or environmentally sustainable. Building stock will need to work harder.

The need to use building space more efficiently means adaptive and responsive domestic micro-environments will replace the old concept of static rooms within a private apartment.




Read more:
Urban density matters – but what does it mean?


These changes will reframe our idea of what home means, what we do in it, and how the home itself can support and help inhabitants with domestic living.

So how will these flexible spaces work?

Sidewalk Labs and IKEA are collaborating with Ori, a robotic furniture startup that emerged from the Massachusetts Institute of Technology, to transform our use of increasingly sparse urban living space. They have developed ways to enhance existing apartments with pre-manufactured standardised products to make living spaces flexible.

Leading product designers have produced tantalising concepts of how these newly developed products could enhance our lives in cities where space is at a premium. One example is based on a floor plan measuring just 3m by 3.5m.

Yves Béhar and MIT Media Lab’s design for a robotic furniture system for small apartments, which reconfigures itself for different functions.

The more intensive use of building space with hyper-dense living will have impacts on circulation spaces. It will require more services in tighter spaces and a vigilant eye on emergency evacuation pathways. Public space will be much more crowded and play a more important role in our well-being.




Read more:
People-friendly furniture in public places matters more than ever in today’s city


The robotic furniture that is available now could also help people with some form of impairment negotiate their home environment. An example is a bed that tilts up into a position that makes it easier to get out.

Some furniture now on the market has similar mechanically assisted functions to help people get out of a chair. This can be expanded into a broader range of facilitated living aids for people with physical and other impairments.

Ease of transformation is the key

Mobile furniture is not a new idea. The late 1980s and early 1990s spawned a whole range of mobile furniture, such as tables on wheels and sideboards with castors.

We have always tried to make rooms adaptable. Japanese screens or room dividers were one way. We have space-saving and transforming furniture from IKEA such as folded-up hallway tables that can become dining tables.

The idea of being able to transform our living space made these mobile furnishings enticing. But they all required a range of manual actions and this effort meant that, after a few initial experiments with them, they ended up in one static position. These mobile items became integrated and firmly located within the accumulations of things that make up our private sphere and who we are.




Read more:
Reinventing density: co-living, the second domestic revolution


Industrial designers such as the late Luigi Colani designed pre-manufactured dwellings with rotating interiors – but the ease of transformation is what really makes a difference now. It’s likely to have reverberating effects.

Luigi Colani’s Rotor House.

The term robotic furniture conjures up Jetsons-like images, but what this means is we will have adaptive spaces. Rooms will transform from bedroom into living room or from study into entertainment space at the touch of a button, a gesture, or a voice command.

While the videos (above) of beautifully designed spaces make the idea tantalisingly attractive, we need to bear in mind these are initial concepts, even though well-developed. But this heralds the beginning of an entirely new way of conceiving and inhabiting space. We have reached a time where everything is in flux.

The Ori Cloud Bed in action.

It introduces another element into our daily routine. The time it takes for the transformation to be completed plays a big role. Too slow and we think twice about it, too fast and it might knock a few things about. In the examples shown (above) they are workable and safe.

If we take this development a step further, the way our cupboards store and provide access to our things might be next in line for robotic optimisation.

It’s not just rooms that will be transformed

There are still questions to be answered. For example, will the speed of the spatial transformation taking place influence the speed of our personal routines, like the time we allow for our morning coffee routine before heading out the door?

How will these new flexible spaces affect our sense of belonging and feeling at home, when everything can change with a voice command?




Read more:
Control, cost and convenience determine how Australians use the technology in their homes


Robotically optimised homes might change culture in similar ways to how digital communications altered our conversations, social conduct, personal relationships, and behaviour.

The way we think about building and living in high-rise apartments, which we have done for hundreds of years, is about to take a turn. It could transform how we conceive of and inhabit vertical space.

Existing building typologies, and the ways buildings are designed and developed, will change entirely. This could have a massive and disruptive impact on real estate development, building design and regulation, construction methods, housing and social policy.

Christian Tietz, Senior Lecturer in Industrial Design, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Regulating Facebook, Google and Amazon is hard given their bewildering complexity



Governments are attempting to regulate tech giants, but the digital disruption genie is already out of the bottle.
Shutterstock

Zac Rogers, Flinders University

Back in the 1990s – a lifetime ago in internet terms – the Spanish sociologist Manuel Castells published several books charting the rise of information networks. He predicted that in the networked age, more value would accrue in controlling flows of information than in controlling the content itself.

In other words, those who positioned themselves as network hubs – the routers and switchers of information – would become the gatekeepers of power in the digital age.

With the rise of internet juggernauts Google, Facebook, Amazon and others, this insight seems obvious now. But over the past two decades, a fundamentally new business model emerged which even Castells had not foreseen – one in which attracting users onto digital platforms takes precedence over everything else, including what the user might say, do, or buy on that platform.

Gathering information became the dominant imperative for tech giants – aided willingly by users charmed first by novelty, then by the convenience and self-expression afforded by being online. The result was an explosion of information, which online behemoths can collate and use for profit.




Read more:
Here’s how tech giants profit from invading our privacy, and how we can start taking it back


The sheer scale of this enterprise means that much of it is invisible to the everyday user. The big platforms are now so complex that their inner workings have become opaque even to their engineers and administrators. If the system is now so huge that not even those working within it can see the entire picture, then what hope do regulators or the public have?

Of course, governments are trying to fight back. The GDPR laws in Europe, the ACCC Digital Platforms report in Australia, and the DETOUR Act introduced to the US Congress in April – all are significant attempts to claw back some agency. At the same time, it is dawning on societies everywhere that these efforts, while crucial, are not enough.




Read more:
Consumer watchdog calls for new measures to combat Facebook and Google’s digital dominance


Gatekeepers reign supreme

If you think of the internet as a gigantic machine for sharing and copying information, then it becomes clear that the systems for sorting that information are vitally important. Think not just of Google’s search tool, but also of the way Google and Amazon dominate cloud computing – the largely invisible systems that make the internet usable.

Over time, these platforms have achieved greater and greater control over how information flows through them. But it is an unfamiliar type of control, increasingly involving autonomous, self-teaching systems that are increasingly inscrutable to humans.

Information gatekeeping is paramount, which is why platforms such as Google, Amazon and Facebook have risen to supremacy. But that doesn’t mean these platforms necessarily need to compete or collude with one another. The internet is truly enormous, a fact that has allowed each platform to become emperor of a growing niche: Google for search, Facebook for social, Amazon for retail, and so on. In each domain, they played the role of incumbent, disruptor, and innovator, all at the same time.

Now nobody competes with them. Whether you’re an individual, business, or government, if you need the internet, you need their services. The juggernauts of the networked age are structural.

Algorithms are running the show

For these platforms to stay on top, innovation is a constant requirement. As the job of sorting grows ever larger and more complex, we’re seeing the development of algorithms so advanced that their human creators have lost the capacity to understand their inner workings. And if the output satisfies the task at hand, the inner workings of the system are considered of minor importance.

Meanwhile, the litany of adverse effects is undeniable. This brave new machine-led world is eroding our capacity to identify, locate, and trust authoritative information, in favour of speed.

It’s true that the patient was already unwell; societies have been hollowed out by three decades of market fundamentalism. But as American tech historian George Dyson recently warned, self-replicating code is now out there in the cyber ecosystem. What began as a way for humans to coax others into desired behaviours now threatens to morph into nothing less than the manipulation of humans by machines.

The digital age has spurred enormous growth in research disciplines such as social psychology, behavioural economics, and neuroscience. They have yielded staggering insights into human cognition and behaviour, with potential uses that are far from benign.

Even if this effort had been founded with the best of intentions, accidents abound when fallible humans intervene in complex systems with fledgling ethical and legal underpinnings. Throw malign intentions into the mix – election interference, information warfare, online extremism – and the challenges only mount.

If you’re still thinking about digital technologies as tools – implying that you, the user, are in full control – you need to think again. The truth is that no one truly knows where self-replicating digital code will take us. You are the feedback, not the instruction.

Regulators don’t know where to start

A consensus is growing that regulatory intervention is urgently required to stave off further social disruption, and to bring democratic and legal oversight into the practices of the world’s largest monopolies. But, if Dyson is correct, the genie is already out of the bottle.

Entranced by the novelty and convenience of life online, we have unwittingly allowed Silicon Valley to pull off a “coup from above”. It is long past time that the ideology that informed this coup, and is now governing so much everyday human activity, is exposed to scrutiny.




Read more:
Explainer: what is surveillance capitalism and how does it shape our economy?


The challenges of the digital information age extend beyond monopolies and privacy. This regime of technologies was built by design without concerns about exploitation. Those vulnerabilities are extensive and will continue to be abused, and now that this tech is so intimately a part of daily life, its remediation should be pursued without fear or favour.

Yet legislative and regulatory intervention can only be effective if industry, governments and civil society combine to build, by design, a digital information age worthy of the name, which doesn’t leave us all open to exploitation.

Zac Rogers, Research Lead, Jeff Bleich Centre for the US Alliance in Digital Technology, Security, and Governance, Flinders University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Here’s how tech giants profit from invading our privacy, and how we can start taking it back



Your online activity can be turned into an intimate portrait of your life – and used for profit.
Shutterstock.com

Katharine Kemp, UNSW

Australia’s consumer watchdog has recommended major changes to our consumer protection and privacy laws. If these reforms are adopted, consumers will have much more say about how we deal with Google, Facebook, and other businesses.

The proposals include a right to request erasure of our information; choices about whether we are tracked online and offline; potential penalties of A$10 million or more for companies that misuse our information or impose unfair privacy terms; and default settings that favour privacy.




Read more:
Consumer watchdog calls for new measures to combat Facebook and Google’s digital dominance


The report from the Australian Competition and Consumer Commission (ACCC) says consumers have growing concerns about the often invisible ways companies track us and disclose our information to third parties. At the same time, many consumers find privacy policies almost impossible to understand and feel they have no choice but to accept.

My latest research paper details how companies that trade in our personal data have incentives to conceal their true practices, so they can use vast quantities of data about us for profit without pushback from consumers. This can preserve companies’ market power, cause harm to consumers, and make it harder for other companies to compete on improved privacy.

The vicious cycle of privacy abuse.
Helen J. Robinson, Author provided

Privacy policies are broken

The ACCC report points out that privacy policies tend to be long, complex, hard to navigate, and often create obstacles to opting out of intrusive practices. Many of them are not informing consumers about what actually happens to their information or providing real choices.

Many consumers are unaware, for example, that Facebook can track their activity online when they are logged out, or even if they are not a Facebook user.




Read more:
Shadow profiles – Facebook knows about you, even if you’re not on Facebook


Some privacy policies are outright misleading. Last month, the US Federal Trade Commission settled with Facebook on a US$5 billion fine as a penalty for repeatedly misleading users about the fact that personal information could be accessed by third-party apps without the user’s consent, if a user’s Facebook “friend” gave consent.

If this fine sounds large, bear in mind that Facebook’s share price went up after the FTC approved the settlement.

The ACCC is now investigating privacy representations by Google and Facebook under the Australian Consumer Law, and has taken action against the medical appointment booking app HealthEngine for allegedly misleading patients while it was selling their information to insurance brokers.

Nothing to hide…?

Consumers generally have very little idea about what information about them is actually collected online or disclosed to other companies, and how that can work to their disadvantage.

A recent report by the Consumer Policy Research Centre explained how companies most of us have never heard of – data aggregators, data brokers, data analysts, and so on – are trading in our personal information. These companies often collect thousands of data points on individuals from various companies we deal with, and use them to provide information about us to companies and political parties.

Data companies have sorted consumers into lists on the basis of sensitive details about their lifestyles, personal politics and even medical conditions, as revealed by reports by the ACCC and the US Federal Trade Commission. Say you’re a keen jogger, worried about your cholesterol, with broadly progressive political views and a particular interest in climate change – data companies know all this about you and much more besides.

So what, you might ask. If you’ve nothing to hide, you’ve nothing to lose, right? Not so. The more our personal information is collected, stored and disclosed to new parties, the more our risk of harm increases.

Potential harms include fraud and identity theft (suffered by 1 in 10 Australians); being charged higher retail prices, insurance premiums or interest rates on the basis of our online behaviour; and having our information combined with information from other sources to reveal intimate details about our health, financial status, relationships, political views, and even sexual activity.




Read more:
Why you might be paying more for your airfare than the person seated next to you


In written testimony to the US House of Representatives, legal scholar Frank Pasquale explained that data brokers have created lists of sexual assault victims, people with sexually transmitted diseases, Alzheimer’s, dementia, AIDS, sexual impotence or depression. There are also lists of “impulse buyers”, and lists of people who are known to be susceptible to particular types of advertising.

Major upgrades to Australian privacy laws

According to the ACCC, Australia’s privacy law is not protecting us from these harms, and falls well behind privacy protections consumers enjoy in comparable countries in the European Union, for example. This is bad for business too, because weak privacy protection undermines consumer trust.

Importantly, the ACCC’s proposed changes wouldn’t just apply to Google and Facebook, but to all companies governed by the Privacy Act, including retail and airline loyalty rewards schemes, media companies, and online marketplaces such as Amazon and eBay.

Australia’s privacy legislation (and most privacy policies) only protect our “personal information”. The ACCC says the definition of “personal information” needs to be clarified to include technical data like our IP addresses and device identifiers, which can be far more accurate in identifying us than our names or contact details.




Read more:
Explainer: what is surveillance capitalism and how does it shape our economy?


Whereas some companies currently keep our information for long periods, the ACCC says we should have a right to request erasure to limit the risks of harm, including from major data breaches and reidentification of anonymised data.

Companies should stop pre-ticking boxes in favour of intrusive practices such as location tracking and profiling. Default settings should favour privacy.

Currently, there is no law against “serious invasions of privacy” in Australia, and the Privacy Act gives individuals no direct right of action. According to the ACCC, this should change. It also supports plans to increase maximum corporate penalties under the Privacy Act from A$2.1 million to A$10 million (or 10% of turnover or three times the benefit, whichever is larger).
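To make the “whichever is larger” formula concrete, here is a minimal sketch with made-up figures; the function name and numbers below are hypothetical illustrations of the proposal described above, not drawn from any real case.

```python
# Hypothetical illustration of the proposed Privacy Act penalty formula:
# the greater of A$10 million, 10% of annual turnover, or three times
# the benefit obtained from the conduct. All figures are invented for the example.

def max_privacy_penalty(annual_turnover: float, benefit_obtained: float) -> float:
    """Return the maximum penalty under the proposed formula, in dollars."""
    return max(10_000_000, 0.10 * annual_turnover, 3 * benefit_obtained)

# A company with A$500 million turnover that gained A$20 million from the conduct:
print(max_privacy_penalty(500_000_000, 20_000_000))  # 60,000,000 – three times the benefit is largest
```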

Increased deterrence from consumer protection laws

Our unfair contract terms law could be used to attack unfair terms imposed by privacy policies. The problem is, currently, this only means we can draw a line through unfair terms. The law should be amended to make unfair terms illegal and impose potential fines of A$10 million or more.

The ACCC also recommends Australia adopt a new law against “unfair trading practices”, similar to those used in other countries to tackle corporate wrongdoing including inadequate data security and exploitative terms of use.

So far, the government has acknowledged that reforms are needed but has not committed to making the recommended changes. The government’s 12-week consultation period on the recommendations ends on October 24, with submissions due by September 12.

Katharine Kemp, Senior Lecturer, Faculty of Law, UNSW, and Co-Leader, ‘Data as a Source of Market Power’ Research Stream of The Allens Hub for Technology, Law and Innovation, UNSW

This article is republished from The Conversation under a Creative Commons license. Read the original article.