Monday, October 13, 2008

The Field of Scoring

This (Northern) summer I attended the DMAW / AFP Bridge Conference in Washington DC, following the IoF National Convention, with a week ‘off’ in France in between.

I say a week ‘off’ because, whilst in France, I stayed with a friend, Ken Burnett. You may have heard of Ken from his work and books on fundraising including the essential Relationship Fundraising, which I reckon first came out in about 1893. Over a century later, it is still very relevant and promotes a donor-centred approach to maximising lifetime value; i.e. be nice to your donors and they will give you more money in the long term.

Ken actually has a life beyond fundraising. At his place we spent some time walking around his field. But this time the field is the star of his new book, The Field by the River. This is his first non-fundraising published book—a sort of cross between A Year in Provence and Jed Bartlet’s love for trivia. If you haven’t seen The West Wing, don’t worry about that reference. If you did see it, liked Jed and you like nature, you will love ‘The Field’.

During my visit, Ken was obsessed with his ‘score’ or rank on Amazon.co.uk. He was checking it pretty much every hour—as I’m sure you and I would be if we had just had a book published—and it was the main topic of all conversations.

Actually, not all, I exaggerate—just breakfast, morning tea, lunch, dinner, after dinner and a quick check of Amazon before going to bed. To be fair, it was the very first week of sales; I am sure he is not checking every hour this week …

So, what has this got to do with fundraising? And how does it bring in British fundraising stalwarts, Tony Elischer (of Think!) and Tim Hunter (of NSPCC)? Well, you didn’t know it was going to bring in Tony and Tim, but now you do. They only get tiny cameos though.

It started at the IoF Convention, in the bar (of course), with Tony and Tim talking shop (that’s it for Tim—told you it was a little cameo). Tony brought up the ‘scores’ of my speaking sessions at the Resource Alliance’s shockingly named International Workshop for Resource Mobilisation—IWRM—held in Malaysia a couple of months earlier.

I had presented a few sessions in Malaysia that had gone down very well, according to the scores, and Tony ‘ranked’ me—i.e. ‘You came in above blah blah’. Mind you, he did leave himself out of the list—I am sure he did very well too.

And this week I got my ‘scores’ from the IoF too. As an egotistical, needy, must-be-loved, ‘please like me’ person, this is really important to me, and I hope my mum finds out without me having to tell her. But really, the whole scoring system is seriously flawed.

When Tony first told me about the ‘scores’, I accidentally forgot about my ego and instead got on a soapbox about scoring. (OK, it was still ego—just manifesting itself differently.) And here it is.

Conference organisers need an objective way of measuring their speakers’ performances. They need to be able to get the best speakers back, and to build a reliable stable of good speakers to protect their reputation. Seems obvious.

The problem comes from a conflict between the only real purpose of a fundraising conference—empowering individuals to make more money for their organisations—and the fact that people want to enjoy themselves, be entertained, laugh and learn. If you ask, we will always say learning is our priority. But this is not reflected in the scores.

Basically, most people score emotionally. They will score a speaker higher if they like her or him, and higher still if they have fun. But worst of all, they will score lower if they disagree with the speaker. That last point presents a huge challenge for any speaker trying to challenge the prevailing paradigm—which, of course, we need to do if we are to create change.

Now bear in mind a few things:

  • Most fundraising conference speakers are not paid and are not professional speakers.

  • A large proportion of conference attendees are usually first-timers and people new to fundraising.

  • Scores are usually very generous—speakers rarely get the worst possible ranking.

  • Session attendance can vary from a dozen to hundreds, so a couple of outlying scores can have a dramatic impact on some speakers’ averages (see the quick sketch after this list).
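
To make that last point concrete, here is a minimal sketch of the arithmetic—plain Python, with invented scores purely for illustration—showing how the same two unhappy attendees drag a small session’s average down far more than a large one’s.

```python
# Hypothetical scores out of 10; all numbers are invented for illustration.
small_session = [8, 8, 9, 8, 9, 8, 9, 8, 8, 9] + [2, 2]        # 12 forms, 2 unhappy outliers
large_session = [8, 8, 9, 8, 9, 8, 9, 8, 8, 9] * 20 + [2, 2]   # 202 forms, same 2 outliers

def average(scores):
    return sum(scores) / len(scores)

print(f"Small session average: {average(small_session):.2f}")  # ~7.33 -- pulled well below 8
print(f"Large session average: {average(large_session):.2f}")  # ~8.34 -- barely moves
```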

As I mentioned before, I have just got my ‘scores’ from the IoF. The marks for each session were broken into two parts—one for ‘content’ and one for ‘presentation’. For each session, the content and presentation scores were always within 1% of each other—and yet the marks between different sessions varied much more. In other words, most people weren’t really distinguishing between content and presentation.
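
You can see the same thing with a quick correlation check—a minimal sketch, assuming you have per-session averages to hand; the numbers here are invented purely for illustration.

```python
# Hypothetical per-session averages (%). If 'content' and 'presentation' move in
# lockstep like this, the form is effectively collecting one score, not two.
content      = [82, 74, 90, 68, 85]
presentation = [83, 74, 89, 68, 86]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"Content vs presentation correlation: {pearson(content, presentation):.3f}")  # ~1.0
```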

The best learning comes from highly skilled teachers or trainers who really know their stuff, give direct practical information, are willing to adequately prepare before the conference and are engaging and fun. The problem is they are very rare—hence the international conference ‘circuit’ has the same old names cropping up.

So what can we do about it?

Well, we need a paradigm shift in how we ‘score’ at conferences and in how we decide which speakers to invite back.

The organisers of the key fundraising conferences should come together to develop effective and standardised conference evaluation policies and procedures. This includes organisations such as the Institute of Fundraising (UK), Resource Alliance, Association of Fundraising Professionals, Fundraising Institute of Australia, CAF and Fundraising and Philanthropy magazine. Leave egos behind and do what’s right for the fundraising community. They should develop:

  1. A much more thorough and data-driven approach to conference and speaker evaluation. Attendees are customers and should be analysed the way a charity would analyse its donors.

  2. A way to gather more data on attendees—before the conference—to find out about their years of experience, area of work, etc. and then analyse it to see who goes to which sessions.

  3. A process whereby conference organisers can log who came to which session, to put post-conference feedback in context.

  4. An evaluation system that has more emphasis on learning outcomes. This means scoring sessions on more than just content and presentation.

  5. A process that measures ‘scores’ out of ten rather than on a four-point scale. Marks out of ten (rather than ‘poor, ok, good, excellent’) create more meaningful differences between sessions and also make it easier to provide feedback to speakers and conference organisers.

  6. Evaluation forms that collect more useful written feedback and send this to speakers.

  7. A process that takes attendance numbers into account when looking at average scores (see the sketch after this list).

  8. A process for following up all attendees using something like Survey Monkey to ask people what they have done with the learning.
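
As a minimal sketch of what points 1 and 7 might look like in practice—assuming nothing more than a list of returned forms per session, with invented names and scores—small sessions could have their averages pulled towards the overall mean, so that a handful of outlying forms cannot make or break a speaker.

```python
# Invented data: per-session scores out of 10 from returned evaluation forms.
sessions = {
    "Legacy fundraising 101":   [9, 8, 9, 8, 9, 8, 9, 8, 9, 8] * 15,  # 150 forms
    "Challenging the paradigm": [9, 9, 8, 9, 2, 2],                   # 6 forms
}

all_scores = [s for scores in sessions.values() for s in scores]
overall_mean = sum(all_scores) / len(all_scores)

def shrunk_average(scores, prior_mean, prior_weight=10):
    """Pull small-sample averages towards the overall mean, so sessions with only
    a few forms are judged more cautiously than sessions with hundreds."""
    return (sum(scores) + prior_mean * prior_weight) / (len(scores) + prior_weight)

for name, scores in sessions.items():
    raw = sum(scores) / len(scores)
    adjusted = shrunk_average(scores, overall_mean)
    print(f"{name}: forms={len(scores)}, raw={raw:.2f}, adjusted={adjusted:.2f}")
```

The exact weighting is a design choice, of course—the point is simply that attendance (or, more precisely, the number of forms returned) belongs in the calculation, not just the raw average.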

Once these processes are in place, we then need a better way to publish and share scores, information and analysis amongst speakers and conference organisers. The challenge is to get speakers to give up anonymity and share their results with each other, with conference organisers and even with conference participants.

Those are some of my ideas—but I know you have lots of experience as a conference attendee or speaker, so I would very much welcome your views and ideas.

Anyway, as I write this, The Field by the River is ‘scoring’: ‘Amazon.co.uk Sales Rank: 3,486 in Books’, the highest I have seen. Oh, and A Year in Provence is at 3,883, while #1 at Amazon is ‘Chinese Food Made Easy’.

When it was about 4,000 I asked Ken what that meant in book sales. ‘I have no idea, but it was 7,433 yesterday …’

Bloody typical, no one seems to measure the right thing.

Sean

