Support for democracy and trust in politicians is falling. We hear a lot about evidence-based policy as a way to stem this decline, but less about how that evidence should be generated.
One idea that may generate the type of evidence that will help make more informed decisions appears, paradoxically, fairly unpopular with the punters.
Perhaps the problem is that not enough has been done to explain to the public what this idea – carefully testing new policies on small groups first – might mean in practice.
In a new paper just released, we show that we may still be a long way off adopting this practice.
There is an emerging view that there should be much greater use of evaluations of public policies, including randomised controlled trials (RCTs), to test the effectiveness of new policies before they are rolled out. This applies particularly to policies or programs for which there is little or no evidence of likely impact.
RCTs have been around for years in medicine and other sciences, and are increasingly being used by small and large companies to test products and services. Conceptually they are simple, although implementing one can be complex. An RCT involves selecting a sample from a population of interest and randomly dividing it into two groups (using the equivalent of a coin toss). One group is given an intervention (that is, a program or policy) and the other is not. If the RCT has been done properly, the difference in outcomes between the two groups tells us the impact of the intervention being trialled.
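The coin-toss logic is simple enough to sketch in a few lines of Python. This is a toy simulation for illustration only, not anything from the study: the outcome distribution and the "true" effect size are invented, and the point is just that random allocation lets the difference in group means recover the intervention's impact.

```python
import random
from statistics import mean

def simulate_rct(n=10_000, true_effect=2.0, seed=1):
    """Toy RCT: randomly allocate n participants, then estimate the effect."""
    rng = random.Random(seed)
    treated, control = [], []
    for _ in range(n):
        outcome = rng.gauss(10, 3)    # hypothetical baseline outcome
        if rng.random() < 0.5:        # the "coin toss" allocation
            treated.append(outcome + true_effect)  # intervention shifts the outcome
        else:
            control.append(outcome)
    # With random allocation, the difference in group means
    # estimates the causal impact of the intervention.
    return mean(treated) - mean(control)

print(round(simulate_rct(), 2))  # lands close to the true effect of 2.0
```

Because allocation is random, the two groups are alike on average in every other respect, so the gap in means can be attributed to the intervention rather than to pre-existing differences.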
There are other ways to try to measure causation, and some are necessary when an RCT isn’t possible. However, Shadow Assistant Treasurer Andrew Leigh argues in his new book Randomistas that:
Researchers have spent years thinking about how best to come up with credible comparison groups, but the benchmark to which they keep returning is the randomised controlled trial. There’s simply no better way to determine the counterfactual than to randomly allocate participants into two groups: one that gets the treatment, and another that does not.
While there is strong support within the policy and research community for the important role of trials and evaluations, we know far less about what the general public thinks about how policies should be implemented and to what extent they should be trialled before widespread introduction.
In a survey undertaken as part of the ANUPoll series, we ran an online survey experiment that measured the level of support for trials in general and RCTs in particular. We also looked at the factors that influence that support, and whether there is a causal relationship between expert opinion, party identification and support for an RCT.
That is, we ran an RCT on RCTs.
As part of the survey, we asked respondents to “consider a hypothetical proposal to reform” in one of five policy areas (school education; early childhood education; health; policing; support for those seeking employment). We then asked “which of the following approaches do you think the government should take?”:
- Introduce the policy for everyone in Australia at the same time
- Introduce the policy to everyone, but do it in stages
- Trial on a small segment of the population who need it most, or
- Trial on a small segment of the population chosen randomly.
We found that more people want new government policies rolled out without testing – except for jobless support.
Some key findings emerge:
- There is a roughly even split between those who think a new policy should be introduced to everyone at once and those who think it should be trialled on a small segment of the population.
- Respondents most strongly support trials of some kind for employment policies, but are most likely to support an RCT specifically for a school education policy. They are least likely to support an RCT for health service delivery and employment support.
- Those who live in disadvantaged areas and those with low levels of education are the least supportive of RCTs.
What influence do experts’ views have?
The type of policy that is being proposed clearly matters for whether the general public thinks it should be trialled as part of an RCT. However, the views of those outside the political system also matter. We tested this potential effect by randomly varying the wording of the question across respondents.
One “treatment” that we applied was to vary what respondents were told about expert opinion: that experts generally support the policy, generally oppose it, or are divided on it (with one-third of respondents given each version).
The greatest support for a trial in general, or an RCT in particular, occurs when experts are generally opposed to the policy. Conversely, the least support for a trial or an RCT comes when experts generally support the policy, implying respondents believe sufficient evidence must already exist. Support sits somewhere in between when experts are divided.
This has implications, we think, for researchers engaged in policy debates. One potential effect of arguing publicly for a different point of view to policymakers or other researchers is to increase the level of support for trials among the general population. We should make a case for uncertainty when it does exist, as that would appear to increase support for future gathering of evidence.
Indeed, this advocacy for uncertainty has underpinned the push for greater trials and evaluations in policy (and the social sciences).
It is clear that RCTs are likely to be increasingly used by policymakers to test the effect of policy interventions. However, to be truly effective and to avoid a backlash, RCTs need to be supported not only by researchers and policymakers but also by the general public. At first glance, this buy-in is a long way off.
Nicholas Biddle, Associate Professor, ANU College of Arts and Social Sciences, Australian National University and Matthew Gray, Director, ANU Centre for Social Research and Methods, Australian National University