The other week I had one of those fun weeks where the situations and people were so different from one day to the next that I was at risk of getting whiplash. On the one hand, I was lucky enough to join Anne McMurray and a group of people running interesting SenseMaker projects in Northern Ireland. There was a real sense of collaboration and support between the groups, as well as excitement about exploring what the new tools were offering: powerful new research in healthcare, the third sector and social policy, using tools that are only just becoming available.
A few days later, I was talking with academic researchers, some of whom were cut from less adventurous cloth. One thing in particular struck me: a simple but profound fix for people who are looking at engagement or satisfaction in an audience.
One of the slides put up addressed the satisfaction (or otherwise) of students; from memory, the results were taken from the National Survey of Students. Some of the discussion centred on the struggle to make faculty directors realise that 80% was not a good enough satisfaction rating, because that means 20% of our students are dissatisfied.
There are many flaws in that to begin with - starting with what the statistic actually says (as opposed to what is being claimed), whether students should feel completely satisfied, and so on.
It's interesting to me that, if you're measuring satisfaction (or happiness, or engagement, or...), it's usually done on a straight 1-5 or similar Likert scale, with "good" and "bad" at either end. So everyone knows you're looking for scores as close as possible to one end of the scale.
(Of course, this ignores the possibility that respondents' attitudes might change depending on the time of day, who they're with, what they're doing, and so on.) It gives a neat, tidy number that research designers and everyone else can understand intuitively. But some people just don't give 5s. Others always go to extremes. It's not long before the meaning supposedly buried in these rigorously numerical, deeply analysed figures starts to fall apart.
And, for decision-makers, the simplistic focus on increasing the number ("What do we do about the other 20 per cent?") is tempting. And misleading. Set up like this, the problem is essentially Zeno's Paradox of Achilles and the Tortoise: you will never get to 100%, yet you are judging yourself on your ability to do so.
One approach to start with would be to swap out that numerical scale. Instead of putting your ideal at one end, put it in the middle of the scale. At either end you put two negatives: one is the complete absence of whatever you're looking at, and the other is too much of it.
For student satisfaction on the quality of their course, a scale might be:
The subject matter was so easy as to be pointless ____________________________ The subject matter was so esoteric, it was demotivating
So the data comes back (a) actually telling you something about why responses aren't perfectly in the middle, and (b) without inviting the snap criticism of "how do we fix the other XX per cent?"
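As a rough sketch of how this changes the analysis (the -5 to +5 range, the function names and the example scores are my own assumptions, not from any real survey instrument), responses on such a midpoint-ideal scale can be summarised by how far they drift from the middle and in which direction, rather than by how close they get to one end:

```python
# Hypothetical scoring for a "two negatives" scale: the ideal sits at the
# midpoint (0), "too easy" at -5, "too esoteric" at +5.
# Instead of chasing 100% at one end, you ask how far responses sit from
# the middle, and which way they lean.

def summarise(responses):
    """Return (mean drift from midpoint, mean distance from midpoint, lean)."""
    n = len(responses)
    mean = sum(responses) / n                     # signed drift: which way?
    spread = sum(abs(r) for r in responses) / n   # unsigned: how far off ideal?
    if mean < 0:
        direction = "too easy"
    elif mean > 0:
        direction = "too esoteric"
    else:
        direction = "balanced"
    return mean, spread, direction

# Example: most students near the middle, a few finding the course too easy.
scores = [-1, 0, 0, 1, -2, 0, -1, 0]
mean, spread, direction = summarise(scores)
print(f"mean drift {mean:+.2f} ({direction}), avg distance {spread:.2f}")
```

The signed mean tells you *why* the cohort isn't at the ideal (which negative they lean towards), while the unsigned average tells you how far off it they are - two separate answers where a one-ended scale only gives a single number to "improve".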