Being an indie author has a lot of benefits. One weakness, though, is that we do not have extensive and reliable data on consumer behavior around which we can configure our marketing strategies. Most of our efforts to position our wares are based on anecdotal successes reported by other authors (i.e., here’s what I did that worked for me), or on groping about in the dark trying to find anything that works. It would be nice to have more information.
Bestselling author Marie Force headed up a reader survey effort that has yielded some interesting results. You should click over and read the whole thing, but I’ll tell you the bits I thought were most interesting:
Some things that don’t matter to readers:
Having a publisher’s seal of approval on a book only mattered to 3.66% of the respondents.
Bestseller status doesn’t matter very much to readers.
Endorsements by well-known authors don’t have much impact.
Nobody seems to care about book trailers. A slight majority have never watched one, and most say it does not influence their purchasing decisions.
Here are some things that do matter:
Freebies work. Not only do the vast majority of respondents say they have discovered authors they otherwise never would have, but they also indicate they would be likely to buy another book by that author if they liked the freebie.
Facebook works. It crushes all the other social media as a way for readers to find and follow authors.
Reviews are important, but readers pay far more attention to other reader reviews on retail sites than to reviews from publications and review sites.
Before anybody rushes out and reconfigures their entire marketing strategy based on this study, I want to point out a few things.
Surveys are an iffy source of information for several reasons. There is always the possibility of self-selection bias, meaning the responses given by the people who choose to participate in the survey may differ in significant ways from the population at large.
People are notoriously bad at following instructions. That means some portion of the answers may be the result of error or misunderstanding. For example, question 4 on the survey asks respondents, “What is your favorite genre of fiction?”
Of the 2,951 people responding to this survey item, 2,391 answered that “Romance” was their favorite fiction genre. BUT question 5 of the survey reads: “If you chose romance in the previous question, please state your favorite sub-genre.” The number of people answering this item should have been equal to or smaller than the number who chose romance as their favorite genre. Instead, 2,661 people responded to this item. Obviously, something went wrong there. Either people selected more than one “favorite,” or people who did not select romance as their favorite genre answered this survey item. Or maybe both.
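This kind of skip-logic inconsistency is easy to catch programmatically. As a hypothetical sketch (the counts come from the survey write-up; the check itself is my own illustration, not something the study performed):

```python
# Hypothetical skip-logic sanity check using the counts reported above.
# Respondents who answered the romance sub-genre follow-up (question 5)
# should be a subset of respondents who picked "Romance" in question 4.
q4_total = 2951      # respondents answering the "favorite genre" item (Q4)
q4_romance = 2391    # of those, respondents choosing "Romance"
q5_subgenre = 2661   # respondents answering the romance sub-genre item (Q5)

# The follow-up item can have no more respondents than the branch feeding it.
consistent = q5_subgenre <= q4_romance
excess = max(0, q5_subgenre - q4_romance)

print(f"skip logic consistent: {consistent}")
print(f"unexpected responses to Q5: at least {excess}")
```

Running the check shows at least 270 responses to question 5 that could not have come from people who chose romance in question 4, which is exactly the anomaly described above.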
Writing survey items is difficult. For example, the very first item on the survey reads as follows:
I prefer to read (choose as many as apply)
All of the above
The problem with that item is that by allowing respondents to choose “as many as apply,” the whole concept of preference goes out the window. The data become muddled.
It is easy to get excited about a big pile of data, but without context, the data can be misleading. Facebook tops the list for book discovery among the other social media platforms and handily wins as the preferred platform for following an author. HOWEVER, since we don’t know whether the distribution of respondents to the survey corresponds with the distribution of social media users, there may be a Facebook bias in the data. In other words, there could have been a greater population of Facebook users in the study group than in the population at large. Likewise, if FB was used to recruit survey participants, a bias could result.
I did not see any demographic data in the report. I don’t know the male/female/age/income/occupation breakdowns of the participants. That sort of information is usually used to determine whether the sample population is representative. Neither did I see anywhere the number of books the respondents purchase annually. Honestly, if the survey is largely composed of people who only buy a couple of books a year, I don’t know that their answers are all that important.
The upshot of all this is that while such surveys are interesting and may have some applicability, they are not the holy grail. It is easy for people to confuse correlation with causation. That’s how we end up with media strategies that rely on findings that the letter G appears in the titles of a high percentage of bestsellers. People then run out and change the titles of their books to incorporate the letter G. This usually ends in disappointment.
I salute Marie Force for undertaking this survey effort. It may be that her findings help establish a baseline, but validity requires that the study results be replicated. So, I caution against any precipitous changes in marketing strategies based upon this study alone.