Thank you to Joan Leach for the President’s update.
I’m writing this with my leg in a compression bandage, covered in ice, nursing a grade 2 tear in my medial gastrocnemius. I’m told this is a characteristically Australian injury; many people get it at the beach when they have one foot buried in sand, try to move too quickly, and *pop* goes the calf muscle. Sadly, I didn’t get mine on a luxurious beach holiday (or even a morning walk on the beach). But from the moment I hit the pavement, there has been a rush of experts telling me what my treatment regime should be. Shall I head to the GP for a referral for an MRI? Will I head straight to the physiotherapist for an assessment and some time on a TENS machine? Another expert has told me that TENS machines rely on a discredited ‘gate control’ theory of pain, so there is no way they are going to work. A biochemist colleague tells me that getting my electrolytes in balance will help healing (bring on the bananas?).

This rather minor injury, though making me grumpy, illustrates something important about expertise. Since the injury put me on ice for a morning, I spent the time reading science blogs on scientific controversy. Without exception, every controversial science topic forces one to take a position on the nature of experts. Which kind of expert do we want in a particular case: one who knows everything about the science? One who knows about the application of the science? One who knows about the context of the application? And on we go. My answer to the ‘many experts’ problem has always been to say ‘YES’; I want to know what they all say and then I’ll form my view. But my morning with the blogs (and my own helpful experts opining on my calf muscle) showed me the folly of my thinking. Sometimes you have to choose your expertise, and that choice changes what you think the controversy is about.
For me, I chose the physio, not because I like the idea of an outmoded theory guiding my treatment (and to be fair, no TENS machine made an appearance, though I saw one in a corner gathering dust), but because an MRI seemed excessive, time-consuming, and unneeded. But I’m not sure I can justify any of that. And I’m not sure that many of the blogs I read justified the reliance on expertise that they touted. Because we’re in the business of communicating, at least some of the time, what experts say, maybe we should be a bit more forthright about why we pick the experts we do. Let’s hope my hunches turn out to be justified, but I may need to develop a better framework for consulting experts.
Thinking about evaluation
Also while on ice, I got to think about a nice piece that Jackie Randles, Inspiring Australia Manager for NSW, wrote for the Inspiring Australia newsletter. In a conversation earlier in the month, she reflected on the challenges of evaluation and said that one reason she often hears for why institutions don’t evaluate their communication programs is that they seem to be doing fine: people come, they seem to have a good time, the event has a good reputation, and the organisation has little reason to change it. Why evaluate? I’d just like to tick off my ‘top 3’ answers to this one:
- Because you may not be as successful as you think; evaluation is an opportunity to get it even ‘more right’.
- Evaluating and sharing that evaluation can be both an advertisement for your good work and an encouragement for others to raise the bar on their activities.
- Because what your organisation thinks is important and working now could change radically; what is your plan going forward?
To think a bit more about this, Professor Nancy Longnecker (University of Otago) has given us some take-home messages from her work on evaluation.
Now, back to the ice…