
In light of that recent article about computer-generated prose being in many cases indistinguishable from human-written, I'd say those fake reviews are likely written by a computer. Notice that they're mostly of the form "I didn't think I'd be interested in [TOPIC] but [TITLE] was really interesting and really helped me learn about [TOPIC]."



That's an interesting thought. I'm pretty sure I could write up a nice grammar to churn out "interesting" reviews of books. Instead of whining about my book getting buried by fake reviews, maybe I should chuck it and cash in on said reviews!
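To make the joke concrete: a toy sketch of what such a template "grammar" might look like, in Python. Everything here (the template, the word lists) is invented for illustration, following the review pattern the parent comment describes:

```python
import random

# Template matching the pattern noted upthread:
# "I didn't think I'd be interested in [TOPIC] but [TITLE] was
#  really interesting and really helped me learn about [TOPIC]."
TEMPLATE = ("I didn't think I'd be interested in {topic}, "
            "but {title} was really {adjective} and really "
            "helped me learn about {topic}.")

# All filler values are made up for this sketch.
TOPICS = ["beekeeping", "medieval history", "tax law"]
TITLES = ["'The Complete Guide'", "'Secrets Revealed'"]
ADJECTIVES = ["interesting", "engaging", "eye-opening"]

def fake_review() -> str:
    """Fill the template with randomly chosen slot values."""
    return TEMPLATE.format(topic=random.choice(TOPICS),
                           title=random.choice(TITLES),
                           adjective=random.choice(ADJECTIVES))

print(fake_review())
```

The flip side, of course, is that text this formulaic is exactly what should make bot reviews easy to flag.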


There should be fairly obvious ways to identify "review bots" - especially given that reviews are tied to an Amazon account's purchase and browsing history.

The fact that they don't have an efficient algorithm for this suggests they don't really care and never bothered with it.


I'd say it's more likely that this is a complex problem, and will take a bit of time and computing power to work through...

Amazon has hundreds of thousands of products with tens of millions of reviews... correlating those with log history for each review will take a lot of time... not just to run, but to write any automated process and to work through resolving it.

It seems to me that Amazon is pretty responsive when bot reviews are pointed out, and that may be, or at least may have been, a more effective strategy... But looking at an article a few days ago regarding Twitter botnets, and even seeing them try to draw me in... it's a very large problem all around.

Bad people will do bad things... as will misguided people. The bigger issue is the false positives... we've all read the horror stories of when a legitimate domain gets screwed by (insert popular domain registrar here) because of an incorrect report/reaction... or when a business's Google Apps account goes offline and nobody can be reached at Google... it happens.

In the case mentioned in TFA... it's probably prudent to ban the publisher in question. In others, the case may well be different.


> I'd say it's more likely that this is a complex problem, and will take a bit of time and computing power to work through...

It is a complex problem - but Amazon has a serious advantage over other sites that have to deal with such issues (e.g. Twitter) - in that it has significantly more information on each user. I don't think Amazon is short on computing power either.

Taking into account order and browsing history, product review trends, linguistic similarities in review posts, etc., they should be able to get very low error rates in identification.

Further, unlike something like Twitter feeds, it's quite possible to silently de-prioritize abusive reviewers and associated products. Really, I'm quite surprised at how bad a job they are doing - most of these cases are so blatant and obvious that they should not require an author and a live representative to resolve.
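The "linguistic similarity" signal alone would catch the most blatant cases. A minimal sketch of one way to do it, using plain word-overlap (Jaccard) similarity as a stand-in for anything more sophisticated - the threshold and sample reviews here are made up:

```python
def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two review texts, from 0.0 to 1.0."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def flag_similar(reviews, threshold=0.8):
    """Return index pairs of reviews that look near-identical."""
    flagged = []
    for i in range(len(reviews)):
        for j in range(i + 1, len(reviews)):
            if jaccard(reviews[i], reviews[j]) >= threshold:
                flagged.append((i, j))
    return flagged

reviews = [
    "I didn't think I'd be interested in beekeeping but this book was really interesting",
    "I didn't think I'd be interested in knitting but this book was really interesting",
    "Arrived late and the cover was damaged.",
]
print(flag_similar(reviews))  # the two templated reviews pair up
```

A real system would obviously combine this with the account-history signals mentioned above, but even something this crude catches templated reviews that differ only in the blanks.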


If you can't beat them, join them! Of course your comment was meant sarcastically, but I wonder if authors (of genuine books, not the spammy ones) might turn to posting fake reviews and justify it by saying that they have no other choice.

We can see this happening in the case of SEO with many white hat sites employing black/gray hat techniques simply to maintain their current positions in Google.


Are you really suggesting the best solution is to make the reviews more broken and useless?


Good point. If a small number of parties are selling such a bot service, it may be easier for Amazon to detect and block them at origin.



