“Why is it so?!” is a science catch-cry from 20 years ago whose time may have come again. Those of us familiar with Julius Sumner Miller’s science programs on ABC TV (http://www.abc.net.au/science/features/whyisitso/) from 1963 to 1986 or his 1980s Cadbury commercials have his catch-phrase, “Why is it so?” etched into our minds. Sumner Miller would demonstrate some surprising physical phenomenon, exclaim “Why is it so?” and then go about exploring and revealing the underlying science.
I recently attended the National Steering Committee Meeting on Developing an Evidence Base for Science Engagement in Australia. This group is part of the action to implement recommendation 15 of the Inspiring Australia report, “That the national initiative support a program of research in science engagement – such as baseline and longitudinal and behavioural studies, activity audits, program evaluations and impact assessments – to inform future investment decisions by government and its partners.” As part of our discussions we concluded that around 10 of the 15 recommendations required evaluation to determine their potential or realised effectiveness.
At the end of the meeting we realised the widespread need for, and importance of, evaluation and related measures, but wondered how to get our message across to the decision makers in the funding agencies. How do we cast the last of the 15 recommendations, the one which sounds like an arid accounting activity, as the foundation for most of the rest of the report? “Recommendation 15 is the Julius Sumner Miller question,” I said. Perhaps we can sell evaluation to the money people as the justification to ask ‘Why is it so?’ of every item of expenditure in the report.
Evaluation has many purposes. In the context of science communication it measures whether our activities change the way people engage with science. We observe a phenomenon of audience behaviour and ask “Why is it so?”. Then we investigate using a reliable way to gather and measure evidence and seek to formulate the science of what is happening. Evaluation may be a form of ‘market research’ but its potential is far beyond the meanings associated with that term. Evaluation is still in its early days.
I’m pondering whether evaluation will be to science communication as peer review is to the scientific process. Science communication has some peer-reviewed journals, but science communication research is a small part of our overall work to make science accessible. Perhaps not everything needs to be evaluated, but there are plenty of activities that would benefit from a rigorous evaluation of expectations and outcomes.
Peer review evolved gradually with the scientific process over the last few hundred years. I suspect that evaluation and the wider field of evidence-based measures are at an early stage of the development of their species. They will mature through advances in behavioural psychology, the use of increasingly insightful interview techniques, and a deeper understanding and more rigorous application of statistical analysis.
I welcome your thoughts on this. All comments will be carefully evaluated.