Measuring public attitudes to science

Responding to Carol’s very valid point about how we measure the effectiveness of science communication, one reason we don’t is that we are seldom given enough budget to do so properly. The budget rarely covers the communication itself, let alone in-depth impact assessment.

That said, let me outline two methods I have found to work.

At CSIRO in the good old days we used to triangulate public opinion and attitudes to science through a combination of public opinion research (quantitative), media analysis (not just counting stories or web hits, but analysing content, placement, reach and so on) and customer/community value analysis (qualitative research). That was all we could afford, but it gave us a handle on what the public knew and did not know about the agency and its science, what they thought of it, and where the gaps were. It’s OK – but it’s expensive for the average small science outfit. It is, of course, nothing like what large corporations or political parties do to measure whether their messages are getting through.

Prompted largely by the GM food debate, I developed with eminent statistician Dr Nick Fisher a new technique called Reading the Public Mind (RtPM). It has the added advantages of operating in real time, taking a ‘movie’ of public opinion about scientific issues (instead of costly snapshots from opinion polling, which go out of date quickly), and of identifying the drivers of public interest and concern about new science and technologies. This provides very effective ‘early warning’ of how the public is liable to react to a new scientific advance or piece of technology – and the ability to adapt the technology, or the communication, to suit.

We have been running this for two years as a research project in the Invasive Animals CRC, with remarkably consistent results. We are keen to test it in other areas of science.

Traditionally, most communicators use things like media monitoring or opinion surveys to judge impact. But media monitoring only tells you how many stories you got – not what the public did with the knowledge it gained, which is what scientists, managers and communicators really want to know. RtPM does tell you this.

For example, we noticed a trend among the public to underestimate the damage caused by rabbits in the Australian environment. The CRC ran a public awareness blitz – and we saw a clear response in RtPM as rabbit awareness rose again in our largely urban population. Likewise, camels were rated very low as a pest by most Australians – until the DKCRC camel report hit the headlines. Bingo: camels leapt in significance in the minds of Australians as a pest that needs to be controlled for the sake of the landscape.

We are fairly confident, from this and other indications, that we now have a tool which will:

– Improve the rate of science adoption by forewarning research agencies about how their work is likely to be received by the public, so they can adapt it

– Stop the waste of scientific resources on research projects that will never deliver an outcome the public is prepared to accept

– Measure the effectiveness of communication activity in real time, allowing constant adjustment of the strategy to take account of shifts in public attitude

– Work for almost any major scientific issue in which the public has an interest (nanotech, biotech, nuclear, geosequestration, stem cells, xenografts… you name it)

– Give science agencies an argument for increased funding, by demonstrating to politicians and bureaucrats that the public (or industry) actually wants what they are turning out.

I am happy to provide further detail of RtPM to any communicator who is seriously interested.

Cheers

Julian

Julian Cribb FTSE

Julian Cribb & Associates

ph +61 (0)2 6242 8770 or 0418 639 245

http://www.sciencealert.com.au/jca.html

www.scinews.com.au

From: Peter Quiddington
Sent: Friday, 4 June 2010 1:10 PM
To: asc-list@lists.asc.asn.au
Subject: Re: [ASC-list] World class

Just in relation to Carol’s point about measuring the impact of science communication: it is obviously a difficult ask, especially in terms of pinning down causes and effects. However, there seems to be a very strong correlation between the level of scientific literacy in Australia (as measured by PISA – see the reference below) and the standard of science communication.

This is no smoking gun, but it is an interesting correspondence: Australia ranks near the top of the world in the level of debate about science communication, the size of the profession, and the extent of science content in the (general and specialist) media.

During the same period that this has emerged, there has also been growth in the level of scientific literacy. (Yes, many will bemoan that there has also been a fall in particular disciplinary skills – maths, chemistry and so on.) But the level of general scientific literacy is high and getting higher, and the fact that Australia boasts a high level of science coverage (especially the ABC, magazines, etc.) is probably a factor.

So, what is needed now is more focused research on the causes of this rise in scientific literacy and the role of the media in it.

Data to support this …

Thomson, S., & De Bortoli, L. (2008). Exploring Scientific Literacy: How Australia Measures Up – The PISA 2006 Survey of Students’ Scientific, Reading and Mathematical Skills. Victoria. (Download report: http://www.acer.edu.au/ozpisa/scientific-literacy-in-pisa-2006)

On Fri, Jun 4, 2010 at 12:39 PM, Carol Oliver wrote:

As a past science journalist I totally get what Julian is saying, and I am somewhat sympathetic to Peter’s view. However, we live in the net-geners’ age. For me the challenge is not addressing what are now perhaps relatively narrow audiences (the boss, the Government, and the converted) but understanding the impact of the Internet on science news distribution.

The US National Science Foundation has been reporting for some time now that the Internet is the medium of choice when people seek news and information about science. While these figures do not reflect the situation in Australia directly, there are indicators suggesting the numbers also apply here. The most important question of all, though, is whether science news alone is an effective strategy for disseminating news of science’s advances. How do we know what an effective strategy is, and how do we measure it? I’m still gobsmacked by how much time and money is put into communicating science without knowing whether the objectives were even partly achieved among the intended audience(s) – or why those objectives were set in the first place. Given that science is a data-driven enterprise, it is surprising how little data exists on the effectiveness of science communication. I definitely stand here waiting for someone to correct me by pointing to the gobs of substantial data I have been missing over the years; the literature, though, tends to support the view that there is a critical lack of data.

At the ASC conference in Canberra in February, a rather pointed remark was made: “How have we got away with it so long?” Perhaps the answer is we just don’t think about it – the boss wants media space and it is the role of the communicator to get it. Or perhaps it is done, but the results are proprietary.

Don’t get me wrong. I think science news in the media has an important role to play. I’m just not aware of what that is exactly given the Internet Age.

Best, Carol

Dr Carol Oliver
Australian Centre for Astrobiology
University of New South Wales
Room 130, Biological Sciences Building
Kensington, NSW 2052, Australia

Phone: (+61) 02 9385 2061
Cell: (+61) 0417 477 612

From: Peter Quiddington [pquiddin@une.edu.au]
Sent: Friday, June 04, 2010 11:13 AM
To: asc-list@lists.asc.asn.au
Subject: Re: [ASC-list] World class

Well, I agree and very much disagree with Julian. Having spent my time grinding away at the daily coalface, I know that restricting the use of such descriptors is silly. These little pearls not only fall from the lips of old hacks like ourselves, but are often employed by scientists. And why not? Any really good quality piece of research that makes a genuine advance is by definition a world first, and descriptions such as ‘ground-breaking’ and ‘cutting-edge’ are not out of place.

At the same time, I think the general notion that editors are by and large uninterested in science, only in its impacts, is somewhat flawed and increasingly outdated. This is not (altogether) my experience; most need to be shown how and why a piece of research is novel, counter-intuitive, odd, strange, or potentially revolutionary in its future impacts. They need to be shown that the research has uncovered some new essential truth, a new fact of reality, or a new avenue for the human imagination to grapple with in order to address the dilemmas facing humanity, and so on.

The trouble is that terms like ‘ground-breaking’ and ‘world-beating’ are no longer useful in this task. In the world of journalism, these terms lost their currency long ago through overuse, misuse and abuse. We simply need a fresh crop of superlatives. So, all suggestions welcome.
