At the end of Part I, I promised to talk about some results I’d found relating to the influence of reviews on sales. Well, here it is, and it’s a beauty.
Put into layman’s terms, the researchers were looking for ways to reliably predict:
a) the types of reviews that would influence customers to buy, and
b) the types of reviews that would help merchants to sell.
I won’t pretend to understand the methodology or the statistics; however, I think it’s important to look at what the researchers were studying, and where. The ‘what’ included three categories of popular consumer goods:
1. Audio and video players (144 products),
2. Digital cameras (109 products), and
3. DVDs (158 products).
At first glance, these products don’t look terribly relevant to authors. However, if you look more closely at number 3 [DVDs] you will realise that reviews of DVDs are exactly like reviews of books. Both describe the user’s personal experience of the content. As such, the results have direct relevance to us as authors/marketers.
Now for the interesting part, the ‘where’. The data, which was collected between March 2005 and May 2006, all came from Amazon! I told you this was good.
In terms of the results, I’ve cherry-picked the bits most likely to interest authors. They include the following:
1. The perceived reliability of reviewers affected how their reviews were received by customers. Indicators of reliability included information such as a ‘real name’, geographic location, and the badges provided by Amazon. These badges include the reviewer’s ranking – e.g. Top 10 Reviewer. In other words, customers did not rate anonymous reviews as highly as those from ‘real people’.
2. Objective reviews worked better for functional products, such as digital cameras, while more subjective reviews worked better for DVDs, where readers wanted to know what other people thought of the content.
To me, however, the truly interesting part of the study is found in this sentence from the conclusion:
“Based on our findings, we can identify quickly reviews that are expected to be helpful to the users, and display them first, improving significantly the usefulness of the reviewing mechanism to the users of the electronic marketplace.”
As soon as I read ‘…and display them first…’, I had an ‘ah hah!’ moment. You see, once I started looking at Amazon reviews more closely, I noticed that the order in which reviews are displayed is determined not by star rating or date, but by how many customers found the review ‘helpful’.
The screenshot shows the reviewer rating feature that appears beneath each published review on Amazon. A perfect score would be something like 3/3 – i.e. 3 customers found the review helpful and none found it unhelpful. A 3 out of 4 would rate lower because one response was negative.
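To make the scoring concrete, here is a minimal sketch of that kind of ranking. This is purely illustrative – Amazon’s actual algorithm is not public – and the review data is invented; it simply orders reviews by the fraction of voters who found them helpful, so a 3/3 review outranks a 3/4 one.

```python
# Illustrative sketch only: Amazon's real ranking algorithm is not public.
# Reviews are ordered by the fraction of voters who found them helpful.

def helpfulness(helpful_votes, total_votes):
    """Fraction of voters who found the review helpful (0.0 if unvoted)."""
    return helpful_votes / total_votes if total_votes else 0.0

# Hypothetical review data for demonstration.
reviews = [
    {"title": "Great read", "helpful": 3, "total": 4},  # 3 of 4 found it helpful
    {"title": "Loved it",   "helpful": 3, "total": 3},  # perfect score
    {"title": "Meh",        "helpful": 1, "total": 5},
]

# Display order: most helpful first, regardless of star rating or date.
ranked = sorted(
    reviews,
    key=lambda r: helpfulness(r["helpful"], r["total"]),
    reverse=True,
)
print([r["title"] for r in ranked])  # → ['Loved it', 'Great read', 'Meh']
```

Note that a bare ratio like this favours a 1/1 review over a 99/100 one; a production system would likely weight by vote count as well, but the ratio is enough to show why 3/3 beats 3/4.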
Now, although no one knows for certain how the Amazon algorithms work, I suspect Amazon may use the methods developed in the study to track the influence of reviews against sales, and display them accordingly. In other words, the reviews you are most likely to read will be the ones most likely to make you buy.
Taken together, the results do seem to validate Paul Drakar’s idea that higher-ranked reviewers will be more influential.
In the final part of this series I’ll be looking at Drakar’s strategy for enlisting the help of top-ranked Amazon reviewers, and asking the question ‘Do you have the chutzpah to try?’