Are we making an impact with science communication?

By Craig Cormick and Arwen Cross

Community concerns about wind farms and vaccines have led to a discussion about why some people have strong fears of adverse reactions, and why their perceptions of risk don’t align with those of scientists. As Janet McCalman wrote recently:

Their problem is a problem with science, and science has something of a problem with them.

Both sides have a problem that could potentially be addressed by better science communication: communication that works to include all sides of such debates rather than polarising them, and that uses evaluation to measure impact and improve.

There are many good arguments for raising community understanding of science. These include a knowledge of science being useful in daily life (such as determining which medical advice is more sound), the economic benefits (a skilled workforce is good for the national economy), the cultural benefits (that it is fulfilling to know about science, history or music), or even democratic benefits (an informed society can make better decisions). Let’s call this 20th century thinking.

More recent arguments say that people should be engaged early in the directions and outcomes of scientific research, as key stakeholders/tax payers/beneficiaries. Let’s call this 21st century thinking.

But is the question really a discrepancy between 20th and 21st century thinking, as Jenni Metcalfe has suggested? Or is it more about better matching science communication strategies to different audiences, based on evidence? Because if we’re going to debate the best way to communicate science to the public, we must use that key tool of scientific research – evidence!

Our current strategies aren’t working

Regardless of how we argue about the need for it, a key problem with science communication is that, across many measures, and with many audiences, it doesn’t seem to be working very well. Consider the impacts of science communication activities on climate change, vaccinations or GM foods for example. Many people are analysing the problem, and offering solutions, but too many don’t realise they are offering up a single piece of the jigsaw, rather than the whole puzzle.

Quite often, we science communicators are using many strategies to reach one audience – too often the most easily reached and already converted – when we actually need different strategies for different audiences, defined by their interests, values, topics of concern, or who they trust.

Brand science might work for some people, but for others it’s a turn off. Trying to target any single “general public” can prove confounding. It is well understood that we all have different attitudes to, and preferences for, politics, football and TV shows, so why should we presume everyone is going to want the same types of messages about science?

To make science communication more relevant and useful to more people, perhaps we need to consider that domain-specific information is more approachable and useful to some people than treating science as a brand name.

And while many people profess an interest in science and technology, up to 40% say they do not regularly search for or find science and technology stories, and the majority of people gain whatever science information they receive simply in passing, according to a current CSIRO study to be completed later this year.

Another problem with talking to the converted is the inadvertent outcome of polarising oppositional points of view. Online media, in particular, tends not only to reach more polarised groups who support science on a particular issue or reject it, but can actually contribute to their polarisation by reinforcing existing ideas, as people tend to self-select media that conforms with their views. How well do we really understand the finding that information is more likely to be accepted or rejected based on whether it accords with people’s existing ideas, not on how well we craft it?

And what do we really know about whether communication is better coming from a scientist or a science communicator? There is an argument that trying to drive more scientists to communicate can lead to a lot of well-intentioned amateur efforts from time-poor and jargon-rich experts, but for some audiences scientists are clearly the more credible messengers.

Then we should consider the science communicators themselves, who are also a diverse group. Many work for institutions where their role is to attract public or commercial investment for projects. They are primarily responsible for maintaining an organisational brand, using 20th century thinking, and they have little incentive to undertake 21st century engagement thinking, and receive few if any rewards for doing so. Shouldn’t they be judged against their own objectives rather than other people’s?

Let’s try to be a little clearer about whether our goal is to increase scientific literacy, organisational awareness, or science engagement.

And what exactly does science literacy mean anyway? Are we talking about knowing a certain set of facts or principles? Or are we talking about being able to think more critically about evidence, or even being able to actively take part in decisions about the directions and outcomes of scientific research?

So how can we communicate better?

Certainly the methods and the evidence of science need to come across more in some of our communications. That means we can’t just talk about final outcomes and black and white results. We sometimes need to explain where consensus exists, and where it doesn’t. We sometimes need to make things more accessible. We sometimes need to reflect more openly on the nature of science. We sometimes need to hand over some of the decision-making to the public. And sometimes we need to shut up about science, and talk instead about ethics, social context, or the technological uses science can be put to.

Good examples of this have occurred, such as in the public debate on embryonic stem cell research, where moderate and reasoned discussion eventually drowned out the voices of hysteria and hyperbole.

Public attitude research from Biotechnology Australia had shown that attitudes were driven by a complex value chain, influenced by an individual’s personal moral position, the source of the embryonic stem cells, the benefits of the technology and levels of social trust. The data showed that in 2005 almost 80% of Australians were aware of the use of embryonic stem cells for medical research, and this research was supported by 63.5% of the public, compared with 24.5% against it, with the remainder undecided.

Using this knowledge, public debate by scientific organisations tended to focus more on the ethics and values of stem cell research, rather than on explaining the science.

By 2007, when the dust had mostly settled, approval ratings for embryonic stem cell research had risen from 63.5% to 76%, and those opposed had dropped from 24.5% to 20%. Those most in favour tended to be categorised as technophiles, with strong support for uses of science and technology.

Similar values-based communications, such as understanding differing world-views on the relationship between technological development and nature, have been applied in some instances to public debates on GM foods, nanotechnology and climate change. Effective communication efforts have sought to frame discussions in terms of the values that the public are applying to the issues, rather than those of scientists. But such examples tend to be too few and too far between.

To engage more people with science we need to provide more than infotainment or calls for people to change their behaviour in light of scientific evidence. We need to understand better that science communication can be many things to many different people – as is evidenced by the many Inspiring Australia working group reports. It can suit organisational needs or civic needs. It can be done for public good or private benefit. It can involve informing, educating or engaging. And it can be used to provide answers or to raise more questions.

But are we making an impact?

The real question that should drive science communication activities is: are they making an impact? It’s very easy to believe they are when everybody cheers as you blow something up, or the science fans rave about how interesting it was. But that’s a long way from actually contributing to scientific literacy in a sustained way. And, being honest, we rarely measure the long-term impacts we really need to be measuring.

Yet if we don’t evaluate our impact we risk becoming our own worst enemies.

To quote Professor of Law at Yale University Dan Kahan, who has undertaken significant work on how our cultural biases impact our thinking:

Not only do too many science communicators ignore evidence about what does and doesn’t work. Way way too many also shoot from the hip in a completely fact-free, imagination-run-wild way in formulating communication strategies. If they don’t rely entirely on their own personal experience mixed with introspection, they simply reach into the grab bag of decision science mechanisms (it’s vast), picking and choosing, mixing and matching, and in the end presenting what is really just an elaborate just-so story on what the ‘problem’ is and how to ‘solve’ it.

That’s not science. It’s pseudo-science.

This is an extended version of an article that was published at The Conversation. Read the original article.