Victorian Premier Daniel Andrews has announced a suite of reforms to the state’s political donations system. It includes:
a cap on donations by individuals, unions and corporations of A$4,000 over a four-year parliamentary term;
public disclosure of donations above $1,000;
a ban on foreign donations; and
real-time disclosure of donations.
Harsh penalties will be imposed on those who breach the rules, with fines of up to $44,000 and two years in jail.
These proposals follow several dubious events, including Liberal Party fundraiser Barrie Macmillan allegedly seeking to funnel donations to the party from an alleged mafia boss, after Opposition Leader Matthew Guy enjoyed a lobster dinner with him.
Andrews says the reforms will:
… help put an end to individuals and corporations attempting to buy influence in Victorian politics.
Are these reforms good?
The proposed reforms will significantly improve Victoria’s donations system.
The caps on donations will level the playing field and reduce the risk of corruption in the state’s political system. They will prevent rich donors from exerting greater influence over politicians than those who lack the means to do so. Parties will no longer be able to rely on these wealthy donors to fund their election campaigns.
The caps equally target individuals, unions and corporations, meaning that money cannot be channelled through shady corporate structures to evade the rules. However, donations can still be channelled through the federal level, where there are no caps.
Real-time disclosures, which have already been introduced in Queensland, will improve the timeliness of disclosures. Combined with the lower disclosure threshold of $1,000, these are commendable steps towards enhancing transparency.
Election campaigns are currently funded by a mix of public funding and private donations. As there will be caps on private donations, public funding of Victorian elections from taxpayers’ pockets will need to increase.
There will be debate as to the level of public funding that should be given. Public funding should adequately compensate parties, but not be overly generous or allow them to rort the system.
Detractors may argue that, in the age of social media, there may be cheaper ways for political parties to get their messages across, so less public funding would be needed.
It is tricky to work out how to allocate public funding between established political parties, minor parties and new parties. There is also a question of whether public funding should cover activities such as policy development and party administration.
But public funding is already part of Australia’s system. In the 2016 federal election, $62.8 million of public funding was provided, which is about half of federal campaign costs.
Victoria’s move toward more public funding is not unprecedented. New South Wales already has caps on political donations of $5,800 per party and $2,500 for candidates, as well as a ban on donations from property developers and those in the tobacco, liquor and gambling industries. This was accompanied by an increase in public funding of elections, amounting to about 80% of campaign costs.
In Europe and Canada, there are high levels of public funding: between 50% and 90% of costs.
Another worry is that enterprising people and businesses might still circumvent the rules through creative means.
In the US, super PACs (political action committees) are special interest groups involved in fundraising and campaigning that are not officially affiliated with political parties. These groups can raise unlimited sums of money from corporations, unions, associations and individuals, and then spend this money to overtly advocate for or against political candidates.
If this possibility is not regulated in Australian jurisdictions, then our system will remain broken.
How can we improve our national system?
Australia’s political donations system remains fragmented. Ideally, we would have a uniform system with tough rules at both the federal and state levels, so that donors cannot easily evade the rules by channelling their money through more lax jurisdictions.
Labor senator Sam Dastyari has called for a full ban on all political donations from individuals and corporations. Dastyari is no stranger to this issue: he was forced to resign from the shadow frontbench in 2016 following revelations that a Chinese company paid his travel expenses.
Opposition Leader Bill Shorten has said he is not in favour of Dastyari’s position:
When it comes to donations, I don’t think the taxpayer is ready to foot the bill for all political expenses in Australia, so I still think there is a role for donations.
Both Labor and the Liberal Party are in favour of banning foreign political donations, but not all donations generally.
The key issue with political donations is whether large donations secure greater access to politicians than ordinary people have.
Another issue is whether large donations sway politicians to bestow illegitimate favours or adopt policies that directly benefit donors.
Dastyari was the Labor Party’s chief fundraiser in New South Wales from 2010 to 2013. He explained that some donors give money for philanthropic reasons or to support an ideological cause, while those in ethnic communities may donate as a sign of prestige. But he also explained:
Frankly, some people do it because of that very, very murky world of access. And they want access for outcomes.
The suggestion is that it’s possible to “buy” political access and influence through political donations.
The managing director of Transfield Holdings, Luca Belgiorno-Nettis, has likened political donations to the Latin saying do ut des: “I give so that you may give”.
Where will the money come from?
Campaigning for election is expensive. To promote their cause, political parties tend to spend big bucks on high-impact slots on TV and radio, travel extensively, and perhaps hire fancy political consultants.
Membership of Australia’s political parties has declined over the years, so they’re now less able to raise money from membership fees. Parties do receive some public funding, but not enough to pay for an expensive election campaign. This has led to the parties being very reliant on political donations.
If we ban all donations from individuals and corporations, funding for political campaigns must come from elsewhere. Public funding of elections would need to increase, meaning taxpayers would bear a bigger burden in funding elections.
The current level of federal public funding is about half of what an election campaign costs. In some parts of Europe and in Canada, the level of public funding of elections is higher, amounting to between 50% and 90% of costs.
There are challenges in calculating how much public funding should be allocated to parties, including the entitlement of new or micro-parties.
Any regulation of political donations needs to be consistent with the Constitution. Australia has a constitutionally protected freedom to communicate on political matters.
A ban on donations limits political communication by restricting the source of funds available to political parties and candidates to meet the costs of political communication.
The High Court has ruled that any limitations on the freedom of political communication must be proportionate and have a legitimate purpose. Banning donations would seem to have a legitimate purpose: to reduce undue influence on Australian politics and public policy. But it is difficult to predict how the court would rule on proportionality.
The High Court has not previously ruled on a complete ban on political donations, but it has held that caps on donations are constitutional.
Is this a good idea?
Dastyari’s proposal would definitely even up the playing field. It would eliminate the perception and reality that rich donors are able to “buy” access or influence in politics.
Rather than fully banning donations, another option is a low cap on donations of, say, A$1,000. NSW, for instance, already has a yearly cap of $5,800 per party and $2,500 for candidates. A low cap would also level the playing field and reduce the influence of rich donors.
Dastyari is right: it is time to take action on the murky world of political donations. Let’s hope the government will heed the call for change.
Legitimate questions certainly arise from the weekend outrage in London, but they are not those immediately provoked by Pauline Hanson or Donald Trump.
The US president’s impetuous reaction was to tweet that the attack on London Bridge and the Borough Market proved that American courts should “give us back our rights. We need the Travel Ban as an extra level of safety!” Note the exemplary use of the exclamation mark. However, Trump did have the grace eight minutes later to offer a form of condolence to the British people – “WE ARE WITH YOU. GOD BLESS!”
The capitals presumably mean either that he was shouting or that he really means it. Not so the One Nation leader, who chose to use Twitter to desecrate the warning from the British authorities for people to “run, hide and tell” by declaring that it was time to “stop Islamic immigration before it is too late”.
Labor’s Penny Wong rightly declared Hanson’s eructation “irresponsible and crass”. One of Australia’s foremost counter-terrorism experts, Greg Barton of Deakin University, went further, telling me that what the One Nation leader was saying was “downright dangerous” on at least two counts.
First, in this age of postmodern terrorism, Islamic State operates as the first metaphysical nation, with no dependence on physical territory or traditional communication to wield its power. In that environment, the security authorities rely on tips from the communities from which impressionable operatives emerge.
Maligning those very communities, Barton says, tends to make their members turn inward, reducing their trust in the authorities and diminishing the likelihood that they will report the wayward behaviour of people they know. Witness the bizarre spectacle of the Manchester bomber, Salman Abedi, praying loudly in the street.
Second, it encourages the very sense of alienation, the feeling that they are stigmatised outsiders, that leads people to lose their sense of belonging. That makes them more vulnerable to the brutal siren call of murderous extremists.
Hanson either does not know this or does not care, because it is likely that her anti-Muslim message, basically a reworking of her initial hostility to Aborigines and then to Asians, appeals to much of One Nation’s base. What more would you expect from a person who over two decades has used the public purse to turn politics into a highly successful small business?
There are legitimate questions, though, about this latest attack in the UK, the third in as many months. One is whether Britain has a peculiar problem when it comes to these apparently autonomous acts of ghastly violence. The other is whether the London Bridge/Borough Market attack had anything to do with the UK election, now only days away.
The answer to the latter is probably not. As Barton points out, if the perpetrators had wanted to influence voters, they or their sponsors would have made a statement to that effect in some form, either direct or allusive.
That is not to say that the violence of Saturday night won’t affect the result of Thursday’s poll. Conventional analysis has it that assaults on security tend to favour the incumbent, especially if they are from the centre right.
Theresa May’s Tories consistently poll as “better for” national security than Jeremy Corbyn’s Labour Party. But this has not been a conventional UK election campaign and there are also questions about whether a sense may take root within the electorate that the government is failing to protect the community, following two fatal acts of terrorism in just a fortnight – Manchester and now London. May was, after all, home secretary, responsible for domestic security, for six years before she became prime minister.
She has not had a good election. Gone are the days, less than two months ago, when it looked as if she could gain a majority of 100 in the House of Commons, knocking Corbyn for six. Her refusal to engage with Corbyn was seen as arrogant, and UK voters are sick of going to the polls (three times in less than two years). There was also her blunder on a “dementia” tax, essentially a proposal to make the elderly contribute to their health care if they have combined assets of more than £100,000.
Immediate public outcry forced a U-turn, but the damage had been done. As campaign managers would say, May had gone “off-message”. The election was no longer a plebiscite on her managing of Brexit, but an argument about health and welfare, traditional Labour turf.
It was a surprising mistake, especially given that, as a political up-and-comer, May warned the Conservatives back in 2002 that they had become the “nasty party”. Their base was “too narrow” and on occasion so were their sympathies, a sermon this child of the manse had clearly forgotten delivering.
On the question of security, the message from the voters is decidedly mixed. In the wake of the Manchester attack Corbyn boldly, but deliberately, stated:
Many … professionals in our intelligence and security services have pointed to the connections between wars our government has supported … and terrorism here at home.
May responded:
I have been here with the G7, working with other international leaders to fight terrorism. At the same time, Jeremy Corbyn has said that terror attacks in Britain are our own fault.
Corbyn was “not up to the job”, she said. He also faced criticism from within his own ranks, but it seems May’s decision to play the security card was not as effective as she might have hoped, because the opinion polls continued to tighten in Labour’s favour.
None of this means May will lose when the votes come in on Thursday. Rather, it shows that national security is a more complex issue in the UK these days, after a decade and a half of unpopular wars and years punctuated by regular, fatal terrorist attacks.
It is not clear whether the story is the same in either the United States or Australia. It is possible this is one way the UK is grimly unique.
Politics in 1950s Adelaide was a gentlemanly affair. The premier, Thomas Playford, and Labor’s Mick O’Halloran faced each other in four election campaigns between 1950 and 1959. More surprisingly, they dined together each week to discuss Playford’s future plans for South Australia, and often praised each other publicly.
O’Halloran remained Labor leader until he died in 1960. Playford wept openly when told of the death, and was a pallbearer and speaker at O’Halloran’s state funeral.
To contemporary eyes it is not surprising that the victorious Playford – the longest-serving party leader in postwar Australian history – remained leader; it is more unusual that O’Halloran also stayed on, without serious challenge, through four losing elections.
In the decades after the second world war, losing an election was not necessarily grounds for a leader being replaced or challenged. Federal Labor leaders Bert Evatt and Arthur Calwell and Victorians Clive Stoneham and Clyde Holding all lost three successive elections while remaining in place. In contrast, only one party leader since the 1980s (Rob Borbidge, Queensland Nationals) has survived to suffer three or more electoral defeats.
Until at least the 1970s, the major route to party leadership was through seniority, and patience was considered a virtue. When Harold Holt became prime minister in 1966, he proudly told his wife:
I climbed over no-one’s dead body to get here.
In Western Australia, Charles Court “desperately” wanted to be premier, but he was “unbelievably patient”, waiting until his long-reigning predecessor, David Brand, retired for health reasons.
Brand’s successor as premier, Labor’s John Tonkin, did not become leader until he was 63, having been deputy for 15 years, and then became premier when aged 69. Some of his junior colleagues suggested he might step down for someone younger, but he neatly deflected them, and open challenge did not occur to them.
The emphasis on seniority and patience had its costs. It denied some of the most able people their chance to lead.
Leadership is increasingly temporary
The way times have changed is exemplified in the frequency of party coups against sitting prime ministers.
Robert Menzies was the first prime minister to be overthrown by his own party, in 1941. It was another 30 years before it happened again – when John Gorton fell in March 1971 – and then 20 years until Paul Keating defeated Bob Hawke in December 1991.
So, in the century up to 2010, three sitting prime ministers were victims of party coups. Then, in just five years, three more followed – Kevin Rudd was defeated by Julia Gillard in June 2010; Rudd then defeated Gillard to resume the prime ministership three years later, in June 2013; and most recently Malcolm Turnbull defeated Tony Abbott, in September 2015.
The pace and pressure of contemporary society is one reason for the greater turnover of leaders. Of the 17 postwar leaders who led their party continuously for 12 years or more, ten became leader in 1960 or before, and only three (Bob Carr, Mike Rann and John Howard) became leader after 1980.
The fact that leadership has become more precarious and conditional is starkly confirmed by trends in length of tenure. Those who became party leader before 1970 averaged eight years and six months in the role, while those who became leader from 1970 on averaged just under half that: four years exactly.
Similarly, those who became leader before 1970 fought 3.0 elections, on average. Those from 1970 on averaged just 1.2 elections as leader. Some states have moved from a three-year to a four-year election cycle, but that is only a very small part of the explanation.
The more temporary nature of party leadership is clear from these figures, but they only start to capture the greater ruthlessness. A successful leader can still lead the party to several elections, but an unsuccessful (or not likely to be successful) leader is much more quickly disposed of.
In recent decades, fewer than three in ten losing leaders led their party into the next election, in contrast to six in ten in the 1950s and 1960s.
Challenges became increasingly pre-emptive: among those who became leader from 1990 onwards, one-quarter (20 of 78) were ousted by their colleagues before they had fought a single election.
Of the 55 postwar leaders whose tenure began before 1970, most saw their leaderships end for personal rather than political reasons. Almost one in five (ten) actually died in the role, the last such death being Queensland Country Party Premier Jack Pizzey in 1968, the second-last being Harold Holt, who drowned the previous December.
If we combine those who died in office, those who retired because of old age, and those who resigned for medical or personal reasons, the total comes to 55% of all pre-1970 leaders. Since then, all those reasons combined account for just 10% of leaders’ departures.
The rise of partyroom coups
The reasons for leaderships ending in recent decades are much more political. Some 30% resigned either after an election loss or because of poor electoral prospects, compared with 15% of the earlier group, while almost half were forcibly displaced by their own party.
Of leaders whose tenure began after 1970 and finished by 2016, almost half (68 out of 138) were victims of party coups. It has become the single most-common means by which leaderships end.
Taking just the two major parties at federal and state level (plus the Queensland Nationals, which were the major party in that state), there were no successful leadership challenges in the 1960s. But since 1970, there have been fully 73, and the rate has been accelerating. In the 17 years of this century there have already been 32.
The increasing frequency of leadership coups has not made them any less disruptive. In process, they are often fraught by uncertainty and crisis and sometimes these, the most personal of political conflicts, produce enduring legacies of bitterness and internal division.
Despite having their roots in parties’ greater electoral pragmatism, the majority are followed by electoral failure rather than success.
How do we account for forces and events that paved the way for the emergence of Islamic State? Our series on the jihadist group’s origins tries to address this question by looking at the interplay of historical and social forces that led to its advent.
In the penultimate article of the series, Harith Bin Ramli traces the Muslim world’s growing disaffection with its rulers through the 20th century and how it created the climate for both the genesis of Islamic State and its continuing success in recruiting followers.
Islamic State (IS) declared the re-establishment of the caliphate on June 29, 2014, almost exactly 100 years after the heir to the Austro-Hungarian Empire, Archduke Franz Ferdinand, was assassinated. Franz Ferdinand’s death set off a series of events that would lead to the first world war and the fall of three great multinational empires: the Austro-Hungarian Empire (1867-1918), the Russian Empire (1721-1917) and the Ottoman Empire (1299-1922).
That IS’s leadership chose to declare its caliphate so close to the anniversary of Ferdinand’s assassination may not entirely be a coincidence. In a sense, the two events are connected.
In declaring the resurrection of a medieval political institution almost exactly 100 years later, IS was announcing its explicit rejection of the modern international system of sovereign nation states that emerged from those empires’ collapse.
Other than the Ottoman dynasty’s very late and disputed claim to the title, no attempt has been made to re-establish a caliphate since the fall of the Abbasid dynasty at the hands of the Mongols in 1258. In other words, Sunni Islam has carried on for hundreds of years since the 13th century without the need for a central political figurehead.
The Abbasid caliphs began to lose power from the mid-ninth century, effectively becoming puppets of various warlords by the tenth. And the caliphate underwent a serious process of decentralisation at the same time.
Key contemporary texts on statecraft, such as Abu al-Hasan al-Mawardi’s (952-1058) Ordinances of Government (al-Ahkam al-sultaniyya), described the caliph as the necessary symbolic figurehead providing constitutional legitimacy for the real rulers – emirs or sultans – whose power was based on military might.
As in the case of the Shi’i Buyid dynasty (934-1048), these rulers didn’t even have to be Sunni. And they were often expected to provide legislation based on practical and functional, rather than religious, considerations.
The Muslim world, then, had arguably already experienced secularisation of sorts before the modern age. Or, at the very least, it had for quite some time existed within a political system that balanced power between religious and worldly interests.
And when the caliphate came to an end in the 13th century, both the institutions of kingship and the religious courts (run by the scholar-jurists) were able to carry on functioning without difficulty.
It was the 19th-century Muslim revivalist and anti-colonial movement known as Pan-Islamism that was responsible for reviving the Ottoman claim to the caliphate. And the idea was revived again briefly in early 20th-century British India as the anti-colonial Khilafat movement.
But anti-colonial efforts after the fall of the Ottoman Empire, even those primarily based on religious beliefs, have rarely called for a return of the caliphate.
If anything, successors of Pan-Islamism, such as the Muslim Brotherhood, have generally worked within the framework of nation states. Putting aside doubts about their actual ability to commit to democracy and secularism, such movements have generally envisioned an Islamic state along more modern lines, with room for political participation and elections.
Modern utopias and old dynasties
So why evoke the caliphate in the first place? The simple answer is that it has never been completely dismissed as an option.
In Sunni law and political theology, once consensus over an issue has been reached, it is hard for later generations to go against it. This was why Egyptian scholar Ali Abd al-Raziq was removed from his post at Al-Azhar University and attacked as a deviant after he argued, in 1925, for a secular interpretation of the caliphate.
As many recent studies show, the idea of the caliphate and its revival has had a certain utopian appeal for a wide spectrum of modern Muslim thinkers. And not just those with authoritarian or militant inclinations.
But, in practice, the dominant tendency here too has really been to seek the liberation or revival of Muslim societies within the nation-state framework.
If anything, national aspirations and the desire to modernise society existed before the formation of the new political order after the first world war. The majority of the populations of Muslim lands welcomed the fall of the three empires, or at least didn’t feel very strongly about the survival of traditional ruling dynasties.
And, with the exception of Saudi Arabia, most dynasties that stayed in power did so by reinventing their states along modern, mainly secular, models.
But this did not always succeed. The waves of revolutions and military coups that swept the Middle East and other parts of the Muslim world throughout the 1950s and 1960s amply illustrate that popular sentiment identified traditional dynasties with the continuing influence of colonial powers.
In Egypt under the Muhammad Ali dynasty (1805-1952), for example, foreign control of the Suez Canal epitomised the interdependent relationship between the dynasty and Western power. This was why Gamal Abdel Nasser (1918-1970) made great efforts to regain the canal in the name of Egyptian sovereignty when he became the country’s second president in 1956.
Dissolving political legitimacy
Either way, the success of the new Muslim nation states could be said to be predicated on two major expectations. The first was improvement of citizens’ lives – not only in terms of material progress, but also the benefits of freedom and the ability to represent the popular will through participatory politics.
The second was the ability of Muslim nations to unite against outside interference and commit to the liberation of Palestine. On both counts, the latter half of the 20th century witnessed abysmal failures and an increasing sense of frustration with Muslim leaders.
In many places, populism eventually gave way to authoritarianism. And the loss of further lands to Israel in the 1967 Six-Day War revealed the inherent weakness and lack of unity among the new Muslim nations.
Anwar Sadat’s peace treaty with Israel after the 1973 Yom Kippur War was widely seen as an act of betrayal, for breaking ranks in what should have been a united front. His decision to do so despite lacking popular support in Egypt only revealed the extent to which the country had evolved into a dictatorship.
Sadat’s subsequent assassination in 1981, at the hands of a small radical splinter group of religious militants, acted as a warning to other Muslim leaders. They could no longer simply ignore or lock away religious critics, even if the majority of the population still subscribed to the secular nation-state model.
Throughout the late 1970s and 1980s, Muslim leaders around the world increasingly made compromises with religious reactionary forces, allowing them to expand influence in the public sphere. In many cases, these leaders increasingly adopted religious rhetoric themselves.
Showing support for fellow Muslims in the Soviet-Afghan War (1979-1989) or the First Palestinian Intifada provided an opportunity to manage the threat of religious radicalism. National leaders probably also saw this as an effective way to deflect attention from the authoritarian nature of many Muslim states.
The Gulf War also brought non-Muslim troops to Arabian soil, inspiring Osama bin Laden’s call for jihad against the Western nations that participated in it. And it eventually led to the US invasion of Iraq. That set off a chain of events that created in the country the chaotic conditions that enabled the rise of Islamic State.
If IS’s leadership is really an alliance between ex-Ba’athist generals and an offshoot of al-Qaeda, as has often been depicted, then we don’t have to go far beyond the events of this war to explain how the group formed. But the rise of Islamic State and its declaration of the caliphate can also be read as part of a wider story that has unfolded since the formation of modern nation states in the Muslim world.
As some commentators have pointed out, it’s not so much the Sykes-Picot agreement and the colonial powers’ drawing of artificial national borders that brought about IS as the failures of the states that grew up within those borders.
The modern nation-state model – as much as it’s based on a kind of fiction – is still strong in most parts of the Muslim world. And, I believe, it’s still the preferred option for most Muslims today.
But the long century that has passed since the first world war has been increasingly marked by frustration. It’s littered with the broken promises of Muslim rulers to bring about a transition to more representative forms of government. And it has been marked by a sense that Western powers continue to control and manipulate events in the region, in a way that doesn’t always represent the best interests of Muslim societies.
A high point of frustration was reached in the events of the so-called Arab Spring. The wave of popular demonstrations against the autocratic regimes of the Arab world was seen as the first wind of change that would bring democracy to the region.
But, with the possible exception of Tunisia, the countries it swept through underwent either destabilisation (Libya, Syria), a return to military rule (Egypt) or a further clampdown on civil rights (Saudi Arabia, Bahrain and other Gulf monarchies).
I would hesitate to describe IS’s declaration of a caliphate as a serious challenge to the modern nation-state model. But the small but steady stream of followers it continues to recruit shows it would be wrong to take for granted that the terms of the international order can simply be dictated from above forever.
When brute force increasingly has the final say over how people live their lives, it becomes harder for them to differentiate between the lesser of two evils.