In the last installment of this increasingly bonkers blog series, we looked at audiences - who are awards for, anyway? That's the first half of the story. The second half: what is an award supposed to accomplish?
In the world of marketing, we chuck around a lot of esoteric vocabulary - things like 'awareness', 'market share', 'salience', etc - when it comes to establishing objectives. But the easiest way of getting to the heart of the matter is simply asking this: "What does success look like?"
Does it look like every child in America eating your brand of delicious cookie? Does it look like your type of toothpaste shelved in a special rack, at the end of the aisle? Does it look like a long line on opening night? Or does it look like your boss being happy?
This particular blog post covers two things - what success looks like (objectives) and how we see it (measurement).
(Oh, also? It is long.)
Success looks like... a lot of people talking about the books.
For the sake of this blog post, think about the oldest device in marketing: a funnel (as a diagram, not a drinking aid). At the top end is awareness, then consideration, then, finally, the pointy bit at the bottom is purchase. People know about the book, people want the book, people buy the book. That easy!
Therefore, generating awareness is a solid, and necessary, first step. Let's make some noise! And, in a sense, everything else falls out of this - unless an award is heard, it can't reasonably expect to do anything else.
The good news: measurement is easy! Web hits, blog posts, Google volume, media profile, social media mentions - there are a lot of established ways of tracking how much noise an award makes.
For example, here's the Google Trends analysis of the Hugo Awards over the past 12 months. This is probably the least scientific of all the above measurements - for one, 'redshirt' is also an American college football term, which means Scalzi's book gets a lot of false credit during training season. But you can see that the peaks (which are indexed, by the way) are all taking place about when you'd expect: announcements of shortlist and winner. If the award overshadows the book on the whole, that's no real surprise - the award has a lot of categories and finalists, the book is just one thing.
You can also see a spike where Redshirts overtakes the Hugo Awards when the news of the TV series came out. And, similarly, there are also spikes for the award that don't involve the book - see, for example, the second-largest spike on the graph: the recent Jonathan Ross debacle.
The bad news: see example above, this can very easily become the wrong noise. Referring back to the earlier post about criteria, there's a difference between awareness of the books and awareness of the award. These are two very different objectives, and they need to be measured separately. Bolded because ZOMG. If success looks like having a famous award, that's a very different thing from successfully getting more people talking about books. Also - a dangerous thing to conflate. When researching awards on Google Trends, the biggest spikes are the Jonathan Ross FAIL, Christopher Priest excoriating the Clarke Award, the Clarke's gender-related controversy and the complete dissolution of the BFS. (You can see the Clarke and BFS 'peaks' here.)
This is why "what success looks like" is so important. On one hand, the BFS can claim 2009 was their most successful year ever. On the other, it kind of sucked. Ditto the Clarke in 2012 (and arguably 2013). Awareness, yes. Awareness of the books? No.
So why would having a famous award ever be a good thing? Let's not over-think this too much - a famous award gets more attention, and that does (as seen with the Hugos) spill over into the books. The BFS had a 'terrible' 2009, but that was a watershed moment: it then received more attention - positive attention - in later years. (And, let's not forget, prior to 2009, no one had any idea who or what it was. All publicity isn't good publicity, but if you can struggle through it, it may be better than nothing...)
The lesson: awareness is a valuable, measurable objective - but make sure you are clear on what you're making people aware of (and why).
Success looks like... more people wanting the books.
We're into the middle of the funnel here. Consideration is bridging the gap between knowing about the book and wanting it - which means, roughly, it breaks down into two interlinked ways:
a) Giving people compelling reasons to buy the book. This comes down to being able to demonstrate why the book is the best/most epic/progressive/whatever. This is tough for awards to do themselves, but they can (hopefully) generate that kind of conversation amongst bloggers, reviewers, readers and retailers. Still, that's tough - which is why most awards skip straight to the second method, which is...
b) Being a trustworthy source of information. This is the idea of awards as being high quality recommendations. You trust an award to give you good books. The award's reputation, perception and level of awareness (see above) all factor in here. I trust what the Man Booker Prize says more than something that's voted on by a convention in Topeka, Kansas. Tough, but true.
What does consideration look like? If awareness is "the DGLA shortlist is out and Brent Weeks is on it", consideration is "oooh, that Brent Weeks book sounds good". Measuring consideration is considerably tougher than measuring awareness - it isn't about volume, it is about content (and content quality). Traditionally, this is where surveys come in - finding some sort of baseline measure of how well people understand, or how interested they are in, something, then seeing how that measure moves over time. Social media monitoring is also useful here: not just casual searches, but tools like Sysomos that track sentiment as well.
For purely anecdotal evidence, there are a few behaviours that work as indicators as well - book reviews off the back of the awards, for example. When Martin Lewis reviews the BSFA shortlist, we can assume he's going to generate some consideration (unless he hates them all). An award having a presence in retail or in libraries is also a driver for consideration - when Forbidden Planet put up a Kitschies display (bless 'em), that's not going to guarantee an increase in sales, but it will guarantee an increase in people thinking about the book. (Technically, that's still awareness, but awareness that close to 'point of purchase' is pretty much consideration - especially with physical goods. Just roll with it.)
I think there's also a different, but equally important, approach to consideration, that is: success looks like... a better career for the winning author. This is a back-handed way of talking about awards as a means of literary recognition, verifiable proof that the author is "good". This, in turn, means that the editors that commissioned and edited them are "good", the publisher that pays them is "good", etc. Awards as a sector-wide thumbs up.
How do we measure this? Pretty much the same way - reviews and sentiment, but probably also some longer term factors. "x won the y award then got a different/better/new book deal" - an award can claim that sort of behaviour (true or not) as a measurement of its success against this objective.
Success looks like... more people buying the books.
This is the Holy Grail of awards - and, let's be horribly honest, also the least likely to be achieved. There are a couple of barriers here:
a) Purchasing is very, very bottom of the 'funnel' of behaviour. This is, essentially, the sum total of everything an award needs to do: be heard, be a respected recommendation and drive people to action. Keeping that in mind, it may be more reasonable for awards to break this down into 'mini-objectives', rather than leap blindly towards the big chalupa.
b) And it is blind. Measuring sales is a pain in the ass for publishers and retailers. Tools exist, of course, but even those are approximations. BookScan, for example, measures - by best guess - something around 80% of the market (worse in genre, because the non-BookScan stores are generally independent or specialist retailers). Nor does it include ebooks, which, oops. Other sources include Amazon, who famously only give their data to Hugh Howey.
Certainly sales can be measured, and with some degree of accuracy, but then there are further issues with the data: attribution, for example, or even telling whether the award made a noticeable difference at all. And even the best-case scenario relies on an award making do with anecdotal or second-hand data from authors or publishers.
Which brings us to measurement - certainly an award can capture some data on its own. For example, an award with an Amazon Affiliates account will know any time someone purchases a book from one of their links. (Downside: independent bookstores, probably not so happy.) But the bulk of the measurement is, as noted above, going to be anecdotal: reliant on numbers from publishers and retailers.
However, if sales are the prime (or only) objective, there are at least two ways we've seen of going about it:
a) The Booker model. The Man Booker Prize is one of the few awards that can hand-on-heart promise an increase in sales. How do they do it? A ton of money - and not even their own. Check the terms and conditions - every shortlisted publisher contributes £5,000 towards 'general publicity', and the winner an additional £5,000. The Booker Prize works because having £30,000 (on top of their existing, sterling publicity efforts) generates a lot of awareness. That, plus being an established, respected prize (consideration), means they're geared up to convert consumers into shoppers as smoothly as possible. Plus, they have the practical relationships in place - a Booker finalist has a table waiting for it in the front of Waterstones and a front-page call-out on Amazon. This is everything an award can dream of doing. It works so well, in fact, that the prize insists that publishers have at least 1,000 copies ready for any longlisted title.
As the UK's most famous prize, this also gives us a decent 'ceiling' for expectations of sales success. 1,000 copies for a Booker longlisting isn't totally unreasonable - historically, it looks like 600+ can be expected - with shortlisting and winning offering exponentially increasing benefits. This sounds great, but once you knock it down to genre awards, it sets rather grim expectations. These figures come from everything going their way - retail warmth, infinite respect and massive trade and press coverage. Not to mention that (minimum) £30,000 marketing spend and a privately retained top-tier PR agency.
By contrast, your average genre award has... some enthusiastic volunteers. If any genre award got 1% of the Booker's sales results, I'd eat its trophy. And before you get snippy, remember that 1% of, say, Booker winner Wolf Hall would mean 2,000 incremental sales. In hardcover, no less.
b) Events. The other way for an award to sell books is to get people into bookstores and put books in their hands. There are a lot of reasons for awards to get into events (another post, perhaps?), and the fact that this is a way to actually, measurably flog books is one of them. The downside: this behaviour is 'secondary' - it has nothing to do with being an award (that is, voters or juries selecting books based on some sort of criteria).
If an award is truly committed to measuring itself by the number of books it sells, it either needs to do its core mission really, really, really well (as in the Booker) or... do something else entirely.
Scary stuff, isn't it?*
There are, however, two mitigating factors. First, we know awards do cause sales - I personally have bought a dozen books this year purely based on shortlists (Carnegie and Waterstones, in fact). But, again, we're caught in the sticky syrup of anecdotal evidence - how much of this actually happens, and how do we measure it?
Second, awards also may generate 'indirect' sales - does an award help the author's next book? Or previous books? Or give a book a 'longer tail' than it would otherwise? Winning an award helps grease that awareness/consideration/sales funnel for the long-term. We can all (generously) assume that this happens, but proving it would be impossible. The overall lesson? Sales are a nice thing to drive, but if they're the prize's primary motivation, that award is going to be in trouble.
Stakeholder satisfaction.
Success looks like... a happy membership.
Organisational prizes dominate SF/F, and their objective may not be funnel-related at all. Membership organisations with their own prizes include the Hugos (WorldCon), Nebulas (SFWA), r/fantasy, Tor.com, BFS, BSFA, random con in Topeka, the list goes on and on... the Guardian's "Not the Booker" is an interesting one, as it is essentially an annual game for their most active commenters.
Take the Nebulas, for example. What's a more accurate definition of success - the SFWA being happy with the list... or everyone else in the world being happy with the list? I suspect the SFWA, like most organisations, would leap to say the latter... yet the former is actually the truth of the matter. I don't mean to pick on the SFWA, the same behaviour is true for many, if not all, of these organisations. A BSFA Award that makes the BSFA happy leads to an engaged, healthy membership. A BSFA Award that doesn't make the BSFA happy leads to grumpy, departing members. (Another good example - the BFS: where the Awards Administrator has announced that the number of votes cast has gone up year-on-year again, a positive sign.)
Arguably the organisations on the extremes - those with the biggest and smallest communities - are the most self-aware. r/fantasy and Tor.com's 'best of' lists don't even pretend to be 'for' anyone but themselves. Similarly, 'random con in Topeka' has its own award as a means of keeping its members entertained for a half hour - it isn't trying to shift books or change the face of literature.
How do we measure it? Again, sentiment is hard to measure, but the advantage of membership awards is that we can see the volume of engagement. For example, metrics like % of membership voting, number of nominations, number of votes cast, number of comments on forums or blog posts and people attending the ceremony. Plus there are always qualitative measurements such as feedback surveys. For most of the membership organisations above, the award is the largest (or only) event of the calendar year - opinions will come, and in volume.
I use 'membership' because it is the most common stakeholder group for genre awards, but not the only one (as discussed previously). Awards have boards, sponsors, judges and backers. Success can look like more funding or a new sponsor. It can be a matter of happy judges (see Lou Morgan's post on this topic, earlier this week). It is worth noting that the more targeted the audience, the easier it is to observe and to measure success.
I don't pretend that this (despite the word count) is an exhaustive study of the many ways that awards can be successful. Nor do I believe that most awards have either a single objective or a single audience... However, the goal here is to help frame the discussion. If an award has clear criteria, we are better able to discuss the books it chooses. Similarly, if an award has a clear purpose, we can evaluate how well it is achieving that purpose.
At this point, I think I've done all the conceptual gibbering that I'm going to do - the next step, if I can get folks to play, is to invite in others to have their say. I'm especially interested in what people involved with awards - as judges, organisers or stakeholders - have to share.
*Ok. One other thing about sales, and I apologise as I've harped on this before - it is extremely difficult to make the case for awards as a positive financial return on investment for publishers. Let's take, for example, the World Fantasy Award, which is pretty standard: no submission fee, but no digital submissions. The cost of six books (6 x £3 printing = £18) plus shipping (£40, based on 3 UK, 3 US judges) comes out to around £58. Which isn't a vast amount. (Again, this excludes any value placed on time spent and/or opportunity cost.) But to make that a positive ROI, that award submission would need to generate something like 30 incremental retail sales (based on a £2 margin between publisher and wholesaler; if you're talking something like 99p Amazon sales, that margin is actually more like 15p).
And that's for the 5% of the titles that make the shortlist - the other 95% are just writing that money off entirely. This means that, if 100 books are submitted, each of the five shortlisted titles would need to sell 600 incremental copies for the publishers (as a collective) to break even. [Around the same number as the Booker projects for its longlist. Go figure.] And it isn't like that money is going to the general betterment of literature: it is being dumped in the postal service. EBOOK SUBMISSIONS, PEOPLE. FFS.
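For anyone who wants to poke at these numbers, the back-of-envelope ROI arithmetic above can be sketched in a few lines. All figures are the post's own estimates (print cost, shipping, margin), not audited costs, and the variable names are mine:

```python
# Back-of-envelope ROI sketch for award submissions, using the
# World Fantasy Award figures from the post. Every number here is
# an estimate from the post, not an audited cost.

PRINT_COST = 3.0       # £ per printed copy
COPIES_PER_SUB = 6     # one copy per judge (3 UK, 3 US)
SHIPPING = 40.0        # £ shipping per submission
MARGIN = 2.0           # £ publisher margin per incremental retail sale

# Cost of submitting one title: six printed copies plus shipping.
cost_per_submission = COPIES_PER_SUB * PRINT_COST + SHIPPING   # £58

# Incremental sales needed for a single submission to pay for itself.
breakeven_single = cost_per_submission / MARGIN                # ~29, i.e. "around 30"

# If 100 titles are submitted and only 5 make the shortlist, those
# 5 must collectively recoup everyone's submission costs.
submissions, shortlist = 100, 5
breakeven_per_finalist = (submissions * cost_per_submission) / MARGIN / shortlist

print(f"Cost per submission: £{cost_per_submission:.0f}")          # £58
print(f"Break-even sales, single title: {breakeven_single:.0f}")   # 29
print(f"Break-even sales per finalist: {breakeven_per_finalist:.0f}")  # 580
```

Which lands at roughly 580 copies per finalist - the "600 incremental copies" quoted above, with a bit of rounding. Swap in the 15p ebook margin and the break-even numbers balloon by more than an order of magnitude, which is the whole point.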