> Not all of these fake reviews are one stars – some give five star or other highly rated ratings. The catch with these highly rated reviews is many of them are created to give the false appearance that they were written by Tomlinson to raise his own Goodreads ratings, spoofing his name and photo and sometimes even using his own copyrighted writings.
Wow, that's devious. I wonder if any of the fake product reviews I've seen are obvious fake endorsements placed there by the competition.
This problem isn't one born with the internet. Think about all those "WE BUY HOUSES 4 CASH" signs you see at stop lights. Why can't the city simply look up the phone number on them and convict the business owner for breaking advertisement laws? Because there is no proof he put the sign there. It could be the competition trying to frame him! Thus, the signs are simply thrown out... and he can put new ones out tomorrow.
At the same time, if that sign pops up four weeks in a row and is taken down four times - I should hope the city allocates a bit of labour to actually figure out who is doing it.
And that's the answer that GR doesn't want to hear: clear patterns of abuse are apparent, and they need to allocate more manual labour to moderation. Automatic moderation can achieve pretty decent accuracy, but there's always a grey zone that needs manual review, and as much as we shrink that zone I don't think we'll ever make it disappear.
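The split described above — automate the easy calls, queue the grey zone for humans — can be sketched as a simple confidence-threshold triage. The thresholds and the classifier score here are hypothetical, not anything Goodreads actually exposes:

```python
# Hypothetical triage over an abuse-classifier score in [0, 1],
# where higher means "more likely legitimate". Thresholds are invented.
def triage(review_score: float, auto_ok: float = 0.9, auto_block: float = 0.1) -> str:
    """Route a review based on classifier confidence that it is legitimate."""
    if review_score >= auto_ok:
        return "publish"        # high confidence it's fine: no human needed
    if review_score <= auto_block:
        return "reject"         # high confidence it's abuse: no human needed
    return "manual_review"      # the grey zone that human moderators must handle
```

Tightening `auto_ok` and `auto_block` toward each other shrinks the manual queue at the cost of more automated mistakes, which is exactly the trade-off the comment describes.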
Let's say you buy houses for cash and I put up a sign with your number on it. The police call you and find out that you do, indeed, buy houses for cash. They throw you in jail and I snicker, knowing that by breaking the advertising law myself but with your name I have put you out of business.
While I'm at it, I leave your business card at the scene of a heist.
I was thinking this could be a good way to get lame blogs banned off HN. Just order some upvotes for it from one of those banned upvoting services. No one is going to believe someone else ordered it.
They can remove the voters' voting power instead of banning the website. (This has happened with at least one company's blog/employees that I'm aware of, when I got asked as a contractor to help upvote.)
In Germany such stuff is handled by the "Ordnungsamt" (office of public order, roughly translated). They're not policemen so they don't have the authority to arrest, but they can and will issue fines for littering, graffiti, improper advertising and the likes.
At least graffiti and trash keeps the rents down, plus it looks nicer to the eye than the bullshit uber-clean concrete crap that passes for "luxury flats" these days.
>"Let's say you buy houses for cash and I put up a sign with your number on it. The police call you and find out that you do, indeed, buy houses for cash."
Suppose I were operating such a business with legal advertisements only and the detective asked me "Hey I saw a sign on a telephone pole saying you buy houses for cash, is that right?" why would I answer in the affirmative?
> "No, it's weird that you saw that. I don't post signs on any telephone poles, this is a highly reputable business."
They'd only say that if they're smart. Many of them probably aren't, and their guard will be down if the detective can do a passable "desperate alcoholic" impression over the phone. But regardless, I agree that false negatives are more likely than false positives.
If you're advertising on a telephone pole and a potential customer (or police officer) contacts you, here's how that conversation goes:
"No, it's weird that you saw that. I don't post signs on any telephone poles, this is a highly reputable business. However, as long as you're here, I definitely do buy houses for cash, and it sounds like you're interested in that."
If I didn't in fact advertise on telephone poles, then I would say that wasn't my sign.
If I didn't advertise on telephone poles but somebody else was trying to frame me, and then I proceeded to act as though those signs were my own, then why would I not deserve punishment? If those signs advertised my business and I neglected to disown the signs because I was greedy, I think I'd deserve to be fined by the city. If I admitted on a recorded telephone call with a detective that the signs were mine, even if they weren't, then I've screwed myself with my own greed, which is fitting and just.
It feels like you're skipping through a few steps to create a specific conclusion in order to then refute it?
If a detective merely asks you "do you post signs with your name and number with an offer to buy houses", a lot more steps have to take place before you, personally and individually, would see a fine for what is, more often than not, going to be a civil infraction; one that, I would imagine, you can take photos of, go to your municipality, and contest: "those signs are illegal but are not mine; these signs are legal and belong to me".
> If a detective merely asks you "do you post signs with your name and number with an offer to buy houses",
You're missing the point where the detective specifically asks you if you placed signs on telephone poles and gets a voice recording of you admitting you did place illegal signs. The real reason this doesn't happen is simply because detectives can't be bothered, not because it's an impossible case to make in court.
> The real reason this doesn't happen is simply because detectives can't be bothered
And because it's highly improbable that "yes, those are my signs" over the phone is enough to result in an infraction if they did.
Chances are, you're not even going to get the phone call in the hypothetical you're propping up, even from a clerk's office. If your name and phone number are on it, you'll likely just end up getting it in the mail, without even the courtesy of a phone call to ask how your morning is going.
Chances are these scam real-estate businesses do not have "clerk's offices." The numbers on them are almost always local numbers, and nine times out of ten probably go to the personal cellphone of the jackass who hung the sign. Legitimate real-estate businesses usually don't need to scrape the bottom of the barrel like this.
The possibility of a false negative does exist, but the possibility of a false positive seems greatly overstated and I do not believe aversion to false positives motivates the lack of enforcement as was suggested above.
Whoever answers the phone is likely not going to know or care about how the business advertises.
Like seriously, do you expect an employee to go "oh I don't know of any signs we advertise on, so I guess we can't buy your house"? Even if they know for a fact that the company doesn't have any sign-based advertising they're still not going to turn away the customer.
A valid defense would just be that there was at least one legitimate advertisement that they saw. Just because one sign is illegal doesn’t mean they all are.
I always assumed that the companies behind "we buy houses 4 cash" ads did exactly that - bought houses quickly and on cash terms at far below the market value from people who are in some sort of bad situation and would agree to a lousy deal in exchange for money now. Is that not what they do? Do they do something illegal instead?
Well, yeah, but the parent post talks about the police calling the number on the sign and using that to get a conviction, so I assumed that some sort of other crime was being committed, like maybe they steal the caller's bank info or sign them up for some scam service.
The reason the police don't do what the parent says is not because of difficulties with attribution of the act, it's because they care so little about the offense that they're not going to expend even the slightest effort to prosecute it. If they saw someone putting out signs, they might tell them to stop. Maybe.
Similarly, a person can't be held (directly) legally responsible for receiving illicit material via postal service (at least in the United States). Otherwise, you could simply ship drugs to their home address and get them charged with any number of crimes.
However, a person can be legally raided for the purposes of recovering said drugs, even if they have no idea that they possess them after picking the package up. Furthermore, even if the police have advance knowledge of the package and could have intercepted it earlier, they can still allow delivery and follow up with a raid, just so that they can take the recipient into custody. One prominent example:
This reminds me of a related point: almost always when I find one of the "We will buy your car" cards on my car's windshield, I wonder what to do with it. I don't want to litter the streets, but at the same time I never asked for the stupid card. To this day, I'm still not clear what is, from a moral standpoint, the right thing to do.
Whatever you choose to do with the card, know that they are the ones at fault. Advertisers just swoop in, leave their garbage on people's cars and then make them clean up the mess. Why should people have to go out of their way to clean up after an advertiser?
It is moral to clean up after yourself. Cleaning up after others is a job that demands payment. Cities must tax advertisers so they can employ people to clean up after them.
Advertisers think it's okay to leave unwanted stuff on other people's property and force them sort out the mess. Roommates who did that would be told to knock it off by the people they live with. Why should people accept the same behavior from advertisers who are total strangers?
They are counting on people being decent human beings who do what they're supposed to do without complaint. That's exactly what enables them and lets them get away with their unacceptable behavior. If nobody did that, maybe the situation would become unmanageable and the city would be forced to deal with it.
The right thing to do is to put an end to all advertising. That's the true solution. Nobody's gonna do it because the money speaks much louder than right and wrong.
Main point is to do something to increase the cost of this kind of advertisement or decrease the effectiveness.
Decreasing effectiveness is kinda hard, so in the short term putting it in a trash can, I guess; in the long run, littering the street can be the better option (it could push the city to increase fines etc. for this kind of advertisement)
I think the answer is obvious: a trash can or recycle bin. It's clear this could turn into a DDoS by them, but thankfully they haven't thought like that. It'd probably backfire advertising-wise anyway.
Less obvious since I found out that my city's council taxes those cards. When the rain turns the usual dozen flyers into a plaster that later hardens in the sun, it takes a few minutes to clean, at best.
Vandalism? So illegal activities are now taxed instead of fined? Not that I find it more logical that the council rents my car out to those companies as an advertising medium. But there's zero chance that report would be taken seriously.
I understood from your previous comment that you're already in a situation where it's taxed; I was thinking of some way to make it cost the advertising company money, even if it only serves to have them put out the flyers on sunny days.
I was thinking of it more as a tool for insurance and/or attempting to force the company to pay for the cost of an exterior detailing of your car, since they plastered something to it (with the help of the weather).
In many US areas there are online police reports for minor incidents and the purpose of the report is almost solely so that you have a record for insurance.
It may also help if you call up the company (or publicly shame them with a tweet) to ask for reimbursement for the cost of a car detailer to remove their litter from your car without damage. Having a police report means you could put it all in the hands of your insurance company who have lawyers on staff or use it as part of the negotiation with the company.
Maybe enough police reports about a given company and you could petition the council to revoke that company's ability to flyer any longer?
I know I'm probably dreaming that it would make a difference. The only time I've had an experience with this, a friend used Twitter and got a public apology from the company, along with some monetary compensation for removing the ink residue from the windshield.
It seems there's a huge culture difference here. Not trying to criticize the US way, I guess it works for you, congrats! Here I would be laughed at, unfortunately.
The people who put the flyers on windshields are just a step from insolvency, so there's no use going after them. The "company" behind them is just a little step up the food chain.
I have heard (but don't have personal experience of etc) that there is a similar tactic for “negative SEO”. Often people try to spam links to their own site to boost PageRank, leading to Google etc picking up on this and ranking websites worse for it, leading to people spamming links to competitors' sites so Google will penalise them.
Situations like this make me think that public educational systems should experiment with some form of "digital literacy" courses / exercises for young children with the goal of humanizing the processes of online communication. Teaching standards for how to treat others (and how to respond to observed and experienced abuses) may provide some reduction in the number of individuals that seem to be finding their way to toxic online communities. From a lay perspective it really does seem that people who participate in extremely toxic online communities are exhibiting signs of serious personality deformations; since the internet acts as a significant force multiplier on an individual's ability to spread their perspective, and since the problem of policing online speech without creating a locked-down surveillance nightmare seems unlikely to be solved any time soon, perhaps one of the better options would be to arm adolescents with a proper mentality for handling online harassment under the assumption that it is likely to occur.
There is definitely value to gain from Digital Literacy as well as Digital Etiquette, but I'd take your suggestion a step forward and teach more people how to handle the mind games that stem from toxic internet cultures with lessons on Digital Fortitude.
A lot of people who grew up with the internet wised up and learned to tolerate/ignore troll behavior. This mostly comes with age, but we can do a better job teaching young internet users that the racist commenter is just looking for attention and should be ignored, that the 200 messages could be coming from a single anti-social person, and that a slew of 1-star reviews may not be coming from a reputable source. This would also involve warnings on the repercussions of handling a hostile situation the wrong way (by engaging with a troll and showing obvious signs of stress, or by blithely trusting a DM who appears to be on your side) and more effective ways to cope.
So long as your lessons include the information that this advice only works when you're dealing with low-level harassment. It fails catastrophically when you're dealing with a mob. It's like the advice to "eat healthy foods" - while it's probably right for a random person, when you share that advice with someone who has cancer, you are an insensitive ignorant dick. So if you share it as "basic approaches to normal life", fine, but when you hear "x hasn't been well", maybe you should avoid responding with this helpful simplistic nonsense.
Even as somebody that's used the Internet frequently for 25+ years, I was guilty of falling for some of the traps outlined in this article. I think something like this being taught in schools would shift the state of conversation on the Web dramatically.
For a really long time I've wanted to start an organization for this explicit purpose: raising awareness of disinformation and exploitative online scams and how to identify and avoid them, and teaching people positive ways to interact online and make their online interactions more constructive.
People my age grew up with the internet and had to learn all of this the hard way, but we could really benefit from purposefully educating other people about these things based on our own experiences.
This doesn't have much to do with "digital". Without the internet, these people might just set homeless people on fire.
It's some sort of cultural phenomenon. FTA:
> As to why they're doing it, well, this has been their entire culture for years, picking random innocent people to cyberbully past the breaking point.
From these low-lives to the highest reaches of government, you see people gleefully, and without shame, engaging in cruelty for entertainment. It's decadent, hollow, (self-)destructive.
What would help? No idea... I'd think a bit of philosophy in school might actually help: Stoicism and the like at least model the concept of thinking about purpose and emotions. The other side is probably social.
I disagree. I think there's absolutely something about the anonymity of the internet that leads to many people (particularly kids, but everyone) being worse versions of themselves. They are free to engage in their worst impulses, both without fear of social retribution, and without humanizing the person on the other end.
I'm sure that many of these trolls have friends and family in the real world who would never expect this kind of behavior.
I think this is a good intention but it would end up with little difference.
My iconic example is Marin County: with some of the highest levels of tertiary education, it also has some of the highest rates of parents not getting their kids vaccinated.
Also seen in campaigns against drunk driving and campaigns for eating balanced diets. People become aware of the pros and cons (after exposure) but continue unabated in their behavior.
I think drunk driving campaigns are a great success story. We don't see them pushing drunk driving to dramatic new lows each year because we're already close to the limit of what they can accomplish. There are people who drink and drive despite them, but there would be far more people drinking and driving without them.
If you're a pessimist, maybe my pessimism has experienced integer overflow into optimism.
I admit that drunk driving was not a good example, for two reasons:
One, there are very aggressive negative consequences --this has an impact on some people.
Two, we have a better chance of changing behavior if we do it in a coordinated manner with good methods when we start young. So, I may be a bit too pessimistic, since I think we can make a difference if we start young and persist with good practices. Given that we've been drilling this into kids since middle school, I think it's made a difference.
These are good points you raise. I suppose what I'm really thinking is that there needs to be some sort of "interpersonal digital communications normalization process" for young people that functions similarly to how being at school around teachers / outside at recess around peers functions to normalize children's behavior toward others (or at least serves to mitigate the most blatantly anti-social behaviors in many cases). As it stands, many children are introduced to the web and online communications without much (or any) guidance whatsoever as far as I can tell; it seems like the impact of this hands-off approach is quite negative in many cases, with young people lacking the tools to healthily manage their interactions with the web.
> Situations like this make me think that public educational systems should experiment with some form of "digital literacy" courses / exercises for young children with the goal of humanizing the processes of online communication.
The internet has been around for many decades now. The world hasn't ended.
> arm adolescents with a proper mentality for handling online harassment under the assumption that it is likely to occur.
Most teens are already armed with proper mentality. The only teens who aren't armed are those who have been coddled in safe spaces their entire lives. Maybe spending time in some "toxic community" would help toughen them up.
How about we worry about teaching kids the basics and stop wasting time with nonsense? I'm told by the "chicken littles" that schools are a complete mess. You want to add more time-wasting nonsense to schools? It's always the controlling old people who think the younger generation needs their help.
> since the problem of policing online speech without creating a locked-down surveillance nightmare seems unlikely to be solved any time soon
People like you scare me so much. I still don't understand how you ended up in a forum called "hacker news".
There are plenty of communities that mitigate this problem through earned privileges. Real users who are participating in the community are able to do more than someone that just signed up with a throwaway address. Stackoverflow seems like an okay model... recent moderator issues aside.
Also, the ability to whitelist an author or book for extra moderation seems like a no-brainer. After there is evidence of harassment then all user content needs to be approved before it is made public. Enable trusted moderators from the community to help with this if paid moderators cannot keep up.
This seems like it could get so so much worse than it currently is. The target of harassment seems to be taking it well but what happens on a platform like this to someone that isn't as prepared to deal with it?
> There are plenty of communities that mitigate this problem through earned privileges. Real users who are participating in the community are able to do more than someone that just signed up with a throwaway address. Stackoverflow seems like an okay model... recent moderator issues aside.
I'm scanning my memory banks, and Stackoverflow is the only "earned privilege" community that comes to mind; my experience with it has been uniformly unpleasant, let's say "bordering on toxic". If anything, automatically earned privilege creates competition, which makes everything worse and nastier.
In contrast, I moderate a medium sized FB group in a topic that often has trolling. We eliminate it entirely through hand-picked moderators and a zero tolerance statement. There's no competition to be a moderator and there's actually little for the moderators to do since making things clear mostly works. So there's no competition for anything and people spend their time discussing issues instead.
HN seems to be closer to that situation also - with karma hidden, competition is pretty limited. And anonymous posters can make fine contributions here.
> I'm scanning my memory banks and Stackoverflow is the only "earned privilege" community that comes to mind
As far as "positive earned privilege examples", some come to mind:
* HN, where downvoting requires a certain amount of karma (although there's plenty of human moderation too)
* MetaFilter [1], which has a reputation of good content due to their one-time $5 charge for signing up.
* The /r/AskHistorians subreddit, where you only get to answer once you have in-depth knowledge of a specific topic.
You bring up HN, but it also has a Stackoverflow like system. It's much lighter, but there are things you can't do as a new user (I believe you can't "flag" or "vouch" for posts).
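The karma-gated model described above amounts to a lookup from reputation to an allowed action set. A minimal sketch, with entirely made-up thresholds (not HN's or Stackoverflow's real values):

```python
# Illustrative privilege gating: each action unlocks at a karma threshold.
# The thresholds and action names here are invented for the example.
PRIVILEGES = [
    (0,   "comment"),
    (30,  "flag"),
    (100, "vouch"),
    (500, "downvote"),
]

def privileges_for(karma: int) -> set[str]:
    """Return every action this user's karma has unlocked."""
    return {name for threshold, name in PRIVILEGES if karma >= threshold}
```

The point of the design is that a throwaway account starts with almost nothing, so brigading with fresh accounts buys very little moderation power.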
I think there's a big difference between moderation according to standards, and false reviews.
It's comparatively easy to determine if any single post is using banned language, is abusive, etc. The single post can then be removed.
False reviews, on the other hand, are virtually impossible to identify individually. People's opinions on a book legitimately differ. There isn't an obvious way to distinguish between a review that's part of a harassment campaign or paid brigade and one that's genuine. It's only in aggregate that something seems to be wrong -- but how do you fix it? How do you select which individual reviews get removed?
Moderation is not really a solution here, because all individual reviews will be approved.
One of the fake reviews was by someone who had passed away with a picture obtained from their obituary. A moderator who knows an author or book has been flagged could spend a minute to find this out.
It is definitely a difficult problem - I'll agree with you there. There are some other good suggestions in the thread on making it easier to flag the false reviews/moderate reviews beyond "community standards"
I like the idea of using a captcha that prompts you to enter a random word from a random chapter in the book.
Another system could just hide reviews that are not verified, tying into Amazon purchases to verify them. I don't know why Amazon doesn't lean on the fact that they own Goodreads to do this... Make all the reviews visible if the user asks to see the unverified ones, but by default just show the reviews from people who bought the book through Amazon.
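The verified-by-default idea above is a simple filter over a review record that carries a purchase flag. A minimal sketch, where the `Review` type and its fields are hypothetical stand-ins for whatever Goodreads actually stores:

```python
# Sketch of the proposed default: show only verified-purchase reviews
# unless the user explicitly opts in to unverified ones.
from dataclasses import dataclass

@dataclass
class Review:
    text: str
    verified_purchase: bool  # hypothetical flag set from Amazon order data

def visible_reviews(reviews: list[Review], show_unverified: bool = False) -> list[Review]:
    """Default view hides unverified reviews; opting in shows everything."""
    if show_unverified:
        return reviews
    return [r for r in reviews if r.verified_purchase]
```

Note this doesn't delete anything, it only changes the default view, which sidesteps the "which reviews do we remove?" problem raised earlier in the thread.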
Yes, I agree there a few things that could be done to improve, but they all basically involve giving semi-subjective 'weights' as to the reliability of individual reviewers.
E.g. more likely to be genuine if purchased, if not prepublication (except some people really do receive and review books in advance), if has many reviews, if reviews follow common statistical patterns both per-author and per-book, and so on.
The trouble with all of this is just that it's really, really hard to get right. There's a tremendous amount of 'tuning' involved.
It's probably not possible, but it really would be great if someone could come up with some general elegant theory to solve particularly the 'does this reviewer seem statistically trustworthy' problem, in a way that effectively identifies brigading and harassment, while still allowing for genuine 'oddballs' whose reviews and ratings go against the crowd.
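The semi-subjective 'weights' idea from a few comments up can be made concrete as a toy score that combines the listed signals. Every weight here is an arbitrary illustration; as the thread says, the real difficulty is tuning, not the arithmetic:

```python
# Toy reviewer-trust score combining the signals mentioned above
# (verified purchase, review history, account age, pre-release timing).
# All weights are invented for illustration.
def trust_score(verified_purchase: bool,
                review_count: int,
                account_age_days: int,
                prepublication: bool) -> float:
    score = 0.0
    score += 0.4 if verified_purchase else 0.0
    score += min(review_count, 50) / 50 * 0.3        # established review history
    score += min(account_age_days, 365) / 365 * 0.2  # account longevity
    score -= 0.3 if prepublication else 0.0          # pre-release reviews are suspect
    return max(0.0, min(1.0, score))                 # clamp to [0, 1]
```

A fresh throwaway reviewing an unreleased book scores near zero, while a long-standing verified purchaser scores high; the genuine 'oddball' with an unpopular opinion is untouched, since the score never looks at the rating itself.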
I swear I'm not just meme-ing for the sake of it, but this has always seemed like fundamentally a decentralized trust problem, and one that can potentially be solved by some form of social blockchain.
Basically, instead of an economic currency unit being mined, the value being protected is instead some form of reputational trust-token; 30 seconds of Googling leads to articles like this: https://www.forbes.com/sites/shermanlee/2018/08/13/a-decentr...
Thinking about things this way essentially boils the fundamental problem into what IMO is a pretty "general elegant theory", which is simply to construct a properly balanced incentive structure, which asymmetrically disincentivizes "bad" behavior while encouraging "good" behavior, in much the same way that Bitcoin's core ledger-validation/mining abstraction rewards miners for securing the network while also discouraging prohibitively expensive attack scenarios.
I'm not saying it's easy or obvious, but I think this is exactly the sort of decentralized trust problem that blockchains are well-suited for.
In this specific case, the book isn't even out for advance readers. The literal least Goodreads could do is turn off reviews for the book until the author tells them otherwise.
> One of the fake reviews was by someone who had passed away with a picture obtained from their obituary. A moderator who knows an author or book has been flagged could spend a minute to find this out.
This is an "obvious in hindsight" example but do we expect mods to google search every single name/photo of every review or comment, and then determine if it's legit or not?
This quickly becomes an escalation game where the effort to identify fakes just gets more and more tedious, and it's been repeatedly proven that trolls have way more time and energy to spend on this game than volunteer moderators, and will simply out-grind them to keep up their harassment.
Any solution that involves putting in more human effort than the trolls is likely to fail.
Goodreads has been on life support for years. The community itself has complained about a lack of innovation and updates, and this is just another consequence of a neglected operation.
The real problem here is that Amazon doesn't want to put any money into it.
Your solution makes a lot of sense, but would require effort, and I doubt anyone involved in running GR cares enough to do it.
As an environment for people who care about books, perhaps.
But they completely own any Google search for an author or books, well beyond any dedicated fan forum, publisher, Wikipedia, or the author's own site. They're a cancer on the Internet and search engines.
Instagram is another site with this problem. Someone supplied my email address when creating an account in late 2018. I only know this because I started receiving emails to confirm my email address. Then I started to get a bunch of emails telling me about new posts from other users. Gmail account history only showed logins from my browser and IP address, so I don't believe my account was compromised.
I finally got tired of the emails, so I told the site I forgot the password, changed the password to something long and random, then deleted all the content. I remember having to do something unusual to delete the content, like spoof my useragent to pretend to be a mobile device, or something like that. I've never used Instagram before or since, but it really annoyed me that I have a problem due to their lax controls.
Instagram never should have activated the account and allowed any activity until the email address was verified, which I never did. If a big site like Instagram can't get this right, it doesn't surprise me that a small site like Goodreads can't either.
Instagram never should have activated the account and allowed any activity until the email address was verified
Essentially zero services on the internet operate this way because it increases the friction to signing up & getting started with the service.
Some services do somewhat better by including a "this wasn't me" link in the email verification email to make it as easy as possible to remove your email from the account in question. IMHO, this is a fine way to handle the problem.
It's not just the sites you are asking to deal with that friction, but also every single individual user who ever signs up for anything on the internet.
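The "this wasn't me" link described above is usually implemented as a signed token embedded in the verification email, so the disavow endpoint can trust the click without a login. A minimal sketch, assuming an HMAC server secret; the domain, endpoint, and parameter names are invented:

```python
# Minimal sketch of a signed "this wasn't me" disavow link.
# SECRET, the URL, and the parameter names are all hypothetical.
import hashlib
import hmac

SECRET = b"server-side-secret"  # in practice: a securely stored random key

def disavow_token(account_id: str) -> str:
    """Sign the account id so the disavow link can't be forged."""
    return hmac.new(SECRET, account_id.encode(), hashlib.sha256).hexdigest()

def disavow_url(account_id: str) -> str:
    # This URL is included in the verification email sent to the address.
    return f"https://example.com/disavow?acct={account_id}&t={disavow_token(account_id)}"

def handle_disavow(account_id: str, token: str) -> bool:
    """Verify the token in constant time; on success, unlink the email."""
    return hmac.compare_digest(disavow_token(account_id), token)
```

Because the token is derived from the server secret, only someone holding the original email can produce a valid disavow request, which is the low-friction property the comment argues for.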
Just did this last week with an Instagram account tied to a free email I hardly ever use.
Epic Games also doesn't verify the email address, because my email account "had" an account there too. Worse, it had a credit card saved and appeared to be ready to allow me to use the card after I did the "forgot password" and gained access to the account.
In my observation of the edit patterns being used by moderators, titles are frequently edited when they contain emotionally charged words ("spoof" "trolls" "harass" in this one) to create bland, boring titles no matter what the source article is titled (not violating guidelines).
People who read the article will be exposed to this "bias" anyway. Leaving the title as-is would then be more informative.
Here, the change is okay. Other times, the change rendered the title nonsensical: "XYZ is now closed source" -> "XYZ Changelog".
I would default to a much stronger preference for the original article title: the author/editor probably put more thought into it, and destroying their creative work shouldn't be routine.
And what's with the bias-paranoia? People, including journalists, are allowed to have opinions and emotions. They do not have to equivocate: "The sanitary situation in the camp is becoming dangerous" does not require "...but someone on YouTube believes germ theory is a hoax, so who is to say if sleeping in feces isn't just a good way to stay warm".
> I would default to a much stronger preference for the original article title: the author/editor probably put more thought into the title, and destroying their creative work shouldn't be routine.
And what's routine? How often are titles changed? How often is this viewed as beneficial to the majority, and how often is it viewed as detrimental?
If you're looking for a set of rules that result in the best solution every time, you're going to be disappointed. Whatever solution you come up with, even if it perfectly matched your sensibilities, would be misapplied in some instances because people are responsible for applying it.
The real question is how good a job is already being done editorializing submission titles, and going off what we can remember of past instances is, IMO, worse than useless; it's likely rife with multiple forms of cognitive bias.
There's probably a good discussion to be had as to how well titles are editorialized here, but it rarely happens in response to someone posting about a specific title. If I had to guess, the discussion from the recent post about the site that tracked all HN submission title changes (which I never got a chance to read closely) might be a good starting point, as that data set is probably a good base for exploring this topic usefully.
> Other times, the change rendered the title nonsensical: "XYZ is now closed source" -> "XYZ Changelog".
My understanding of this policy is that the editorializing should be done in the comments rather than in the title of the submission. A submission (and presumably the number of upvotes/comments from others) already indicates that the article has some importance, and opinions about it have a 'level playing field' in the comments.
I think this also supports that the comment section is the place for interesting/insightful discussion.
Well it's worth noting that the original title is an accusation. The guy who is the target of the attack thinks this is lax security, but do we know that? We haven't heard what the people from Goodreads are doing to address this.
Right, another online group of people conducting orchestrated campaigns of harassment. They seem to come together with highly tenuous pretexts (eg some random radio show that isn't even on the air any more) yet the members are highly motivated and resourceful. It is a very strange phenomenon, like a cult worshipping nastiness as the supreme expression of existence. These groups seem particularly against art and creativity, their targets are often novelists, artists and performers. I suppose if you are deeply unhappy and disempowered then any form of art must seem like an affront. Only if all creative acts are stupid may you feel good about your own life chewing cud on a forum somewhere.
The more online everyone and everything becomes, the more prevalent these generalised and distributed lynch mobs are likely to be. They also function with impunity (e.g. Kiwi Farms).
Sadly, any Goodreads competitor will need to miraculously gain the network effect; everyone you know is on Goodreads, so it'll just be you and whoever you can convince to move to a new platform.
As for the downsides for Goodreads, this blatant lack of moderation is troublesome. I also dislike that Kindle / Amazon are the only visible links to purchase books by default. Amazon already dominates the ebook/audiobook market, so I also simply dislike Goodreads due to their acquisition by Amazon.
As I don't care much about the social network aspect, I would use a competitor that offers better functionality.
Unfortunately LibraryThing's homepage makes it seem like it hasn't been updated in a few years, which makes me wonder if it's worth migrating to a platform that might not be actively maintained anymore.
They're actively maintained and I get newsletters and so on from them.
The thing is, on another level, they're done. They offer excellent cataloguing and the ability to share your reviews and so on and so forth. What more do they need to add? A visual refresh every few years? At this point I'd rather they spend the money their customers pay them on keeping things running, not on adding crap no-one cares about, surveillance-capitalism anti-features, or whatever.
When Goodreads started out, Facebook's API was much more open. After you authorized Goodreads they'd have all of your Facebook friends and would proceed to spam them asking them to signup for Goodreads. The openness of Facebook's API was probably very critical to their early growth.
These days though, the FB API has been locked down and basically can't be used for growing your userbase anymore. Any new startups in this space won't have the social graph advantage that Goodreads did. Sad.
It's quite good, and I really like their automatically-generated recommendations BUT there is a small fee, and I have a knee-jerk reaction to anything on the internet that asks me for money.
You get up to 200 books at LibraryThing for free; after that it's either $10/yr or $25 once for life. If you're using the service to that degree (>200 books) then you know its value and should be willing to pay a little in return. TANSTAAFL.
I wouldn't think twice about buying something nice from a store for $25.
Once upon a time I bought something online for about $10 from what was a legitimate business with an address in San Francisco. About 14 months later they claimed I subscribed to some service and started making huge charges to my card ($150/week). Getting them to stop and getting my money back was an enormously stressful and difficult process.
That's why I have a knee-jerk reaction - not a cold, logical reaction - to online purchases from companies I haven't used before, and I'm not the only person like that.
Understood, perhaps my experience will help. I have had an account since 2014 according to their site, and I think I purchased my $25 lifetime in Jan or Feb 2014 (fuzzy memory - might have been cheaper back then? sale?) as I had more than 200 books I wanted to import from Goodreads, that I do remember. (I'm a very lightweight user by book nerd standards, only about 450 books as of today)
I've had no negative fallout from purchasing a subscription, and they have very good privacy controls for those who don't want the social aspect (so you can basically turn off other user interactions in many ways). It's there when I need it, doesn't seem to spam me and I probably paid with Paypal (I use Paypal as a way to not give 3rd parties my CC info, a proxy if you will) like most things.
It may look old school and appear at a glance like it's not maintained, but it's updated and run actively, there are a bajillion people using LibraryThing. Logging in to look at my account, there's a link on the right for the latest news posted today about "The January ER Batch is up! We've got 2,960 copies of 89 titles this month." (early reviewer books) https://www.librarything.com/er/list
> Getting them to stop and getting my money back was an enormously stressful and difficult process.
I've had incorrect subscription charges on one of my cards, and while it was minorly annoying (a search on the card website, fill out a form, repeat one more time the second month it happened) I can't imagine a scenario where it would be "enormously stressful and difficult".
Your card issuer is required to respond when you report fraudulent charges, and if they don't you need a new bank.
Aside from the spoofing issues what are the main drawbacks and benefits of GoodReads from your perspective - what's the worst of times, what's the best of times?
The ratings are useless - any author with a devoted following gets endless 5 star reviews and YA books are stuffed with 5 star reviews. The website is clunky beyond belief and never seems to get improvements. The recommendation engine is terrible. The search facilities are weak and inconsistent. It could be so much better, but it never improves.
One thing I've noticed with the recommendation engine is that it doesn't recognize when its data set is too small. There are quite a few books where I look at the "Readers also enjoyed" list and can identify it as simply a list of other books I and one of my siblings read recently.
For instance, there's a book that my brother and I recently re-read, and the recommendations are...four other books that he and I read recently. Granted, the majority are the same genre, but two aren't. They're just...books we both recently gave 4 stars.
You start by allowing people to rate the ratings. For instance, flag individual titles or tags as 'not interested', as on Steam. Even just a way to hide individual titles would make a system like Netflix much better to use, and bring products customers are more likely to pay for to the foreground. Then you can feed the data to the recommendation engines, which might start to learn about what demographics are actually using the system rather than relying on assumptions. My personal belief (as someone with zero actual experience here) is that dislikes and disinterest would be much better for generating recommendations than likes and interest. 'Likes' just give you what is popular in your familiar genres; 'dislikes' express your tastes.
Yes - it's a difficult problem. It must be possible to build a better statistical model of each rater, in order to weight their opinions. A first step would be to normalise the rating distribution of each person (e.g. by average and standard deviation). I wouldn't use a rating for a particular book, unless the rater had some minimum number of ratings, or a book had very few ratings.
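A rough sketch of that first normalisation step (illustrative only: the 1-5 scale, the minimum-rating thresholds, and the function names are all assumptions, not anything Goodreads actually does):

```python
from statistics import mean, stdev

def normalize_ratings(ratings_by_rater):
    """Convert each rater's raw 1-5 ratings to z-scores, so a chronic
    5-star rater and a harsh critic contribute comparably.

    ratings_by_rater: {rater: {book: rating}}
    returns:          {rater: {book: z_score}}
    """
    normalized = {}
    for rater, ratings in ratings_by_rater.items():
        values = list(ratings.values())
        if len(values) < 2:            # too few ratings to estimate spread
            continue
        mu = mean(values)
        sigma = stdev(values) or 1.0   # constant raters get sigma 1 to avoid /0
        normalized[rater] = {book: (r - mu) / sigma
                             for book, r in ratings.items()}
    return normalized

def book_score(normalized, book, min_ratings=3):
    """Average normalized score for a book; None if too few raters."""
    scores = [r[book] for r in normalized.values() if book in r]
    if len(scores) < min_ratings:
        return None
    return mean(scores)
```

The key property: a 4 from someone whose mean is 4.7 comes out negative, while a 4 from someone whose mean is 2 comes out strongly positive.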
Their Facebook integration used to be downright deceitful. It was very easy to spam all of your friends that way. After they did that I changed my Goodreads account's name to "Chris 'Goodreads spammed all my friends'" and then I discovered that having an unusually long name breaks their design in multiple places. [1]
Besides the dark growth patterns, overall Goodreads feels like a clunky product suffering from feature bloat and poor usability. Unfortunately, because of the network effects and the Kindle integration, it's very hard for a competitor to get off the ground.
(Facebook's API has been locked down in the meantime, and Goodreads now only has access to your FB friends who also linked Goodreads...which it uses to send a friend request to each one.)
No improvements on old features or adding new features, basically no updates at all for the past few years. Recommendations and rating system are crap. Usability around core features like bookshelves are terrible. The awards are just a popularity contest.
I want to second this question - this is a relatively easy-to-fix issue on GR's side to have more moderation tools and powers. But otherwise, are there any other substantial complaints?
their search and filtering mechanisms are abysmal. they have essentially taken tons of valuable user-supplied data and locked it up behind a useless interface. some simple examples - i cannot find humorous fantasy books by searching for books tagged both "fantasy" and "humour". i cannot do a search that returns hundreds of results and then sort them so that the best ones go to the top - and i'm not even talking about some magic relevance algorithm, just sorting by explicit data. i cannot even organise my own read, unread and to-read books easily. all in all it's a usability disaster and totally wastes the labour people have put into building up the database.
Given the rulings around LinkedIn and public data, is it feasible for a competitor to scrape some of that information? Would they get sued to oblivion if they tried?
Not needing to start from scratch would make building a competing service a lot easier. Sites like Stackoverflow have reasonably open licenses on user data, so you could theoretically use that data and build an alternative if the site fell apart. I'm guessing that's not the case for Goodreads though, at least for things like reviews.
But even pulling in basic category information would be easier than starting from scratch.
i would love to see that happen! i have occasionally wondered what it would take to reboot a goodreads clone from scratch but importing the data is definitely a plan with a higher chance of actually replacing the site.
My family was recently shocked to discover that adding a book as read and rating it doesn't automatically set the read date. I don't think it needs to do so, but these people are computer-literate and have entered > 1,000 books apiece. And they didn't know how it works. That suggests poor UX.
As someone who DID know this and always manually adds Date Read...here's how to do it on desktop: hover over the shelf dropdown until a little popup appears above, move up to that popup without letting it vanish, and click "Write a review", which secretly means "write a review or enter date read."
2. It's slow.
Loading just the html of the front page takes > 3 seconds. Loading My Books takes > 5 seconds. Again, this is JUST the html, with no js, css, or images.
3. Nav is bad
This combines poorly with the site speed. One thing I do most frequently is to look at my most recently read books:
- Go to goodreads.com (3+ seconds)
- Click "My Books" (5+ seconds)
- Click "Read" (It defaults to books read and on your to-read list all mixed together.)
- Sort the list by Date Finished (5+ seconds)
- Re-sort the list by Date Finished because it did ascending the first time. (5+ seconds)
(Obviously, I could just bookmark that page with the desired params, but if I'm bookmarking to avoid having to use your site navigation, that's a UX issue.)
4. The recommendation engine is bad.
Various people have mentioned this. I will grant that recommendations are hard. But basically, don't use the recommendations. Use the lists manually built by users. (But note that on most lists the top spots will be pointless recommendations that you read Harry Potter. Gee, never heard of THAT book before, thanks!)
5. Lists aren't super-accessible
As I mentioned, the lists are much more useful than the recommendation engine. They're under Browse->Lists.
There's a search at the top of every single page. It searches books and authors. Not lists.
If you're in the lists section, it...still won't search lists. There's a tiny search box on the lists page for this.
6. Search Breaks Middle-Click
When I search a book, it populates the results without me having to click through to the results page. If I want to open those results in a new tab, though...nope! It'll just re-open the current page in a new tab.
7. Their export tool doesn't work right.
This is a minor quibble--it's VERY nice that they let you export your data at all--but I recently discovered that a lot (most?) of rows in the export are missing the Date Read field even if you entered them. Not all though. I don't know what the pattern is, but it's annoying.
Basically, I think Goodreads has approximately one engineer, whose job is to do some tweaks for the marketing team as needed (they renamed Giveaways recently-ish). There's clearly no designer, as the UX has been essentially untouched in the 12 years since I joined.
It doesn't need a sweeping redesign, but there are obvious UX tweaks they could have made at any point in the past decade and instead didn't. And some performance work, please!
Goodreads is a social network framing itself as a book review website.
You could argue that is necessary to get engagement from reviewers, but in trying to be both, Goodreads doesn't do either very well. Throw in the fact that they haven't improved much since Amazon acquired them, and they've become a sort of static site aimed at driving ebook purchases.
I would argue it's an Amazon referral engine framing itself as a book review site. They have optimized for purchases and not for reader happiness / quality.
A competing site that is focused on surfacing books that the reader likes, with links to more than just a single purchase point would likely gain enough traction to be useful.
I imagine that scraping a user's GoodReads profile every now and again with authorization would also allow the user to update status in their Kindle / eReader while still populating another site, which would be interesting.
Goodreads has a ginormous moat for any competitor trying to get readers who primarily use ereaders, thanks to the deep integration with the Kindle. I keep wanting to try my hand at making a book social network but stop before I've begun because I don't know how to get over that hump.
If someone sets out to build a Goodreads competitor, what's everyone's suggestion on how to aggregate book / ISBN / poster reference data? Google Books API comes to mind but I am wondering if there is a self-hosted solution.
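For the Google Books route, an ISBN lookup is straightforward; a minimal sketch (the endpoint and field names are from Google's public volumes API, error handling and rate limiting omitted):

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API_BASE = "https://www.googleapis.com/books/v1/volumes"

def isbn_query_url(isbn):
    """Build a Google Books volumes query for a single ISBN."""
    return API_BASE + "?" + urlencode({"q": f"isbn:{isbn}"})

def parse_volume(payload):
    """Pull the fields a book site would seed its catalogue with."""
    items = payload.get("items") or []
    if not items:
        return None
    info = items[0].get("volumeInfo", {})
    return {
        "title": info.get("title"),
        "authors": info.get("authors", []),
        "isbn": [i.get("identifier")
                 for i in info.get("industryIdentifiers", [])],
        "cover": info.get("imageLinks", {}).get("thumbnail"),
    }

def lookup(isbn):
    """Network call; returns None if the ISBN isn't found."""
    with urlopen(isbn_query_url(isbn)) as resp:
        return parse_volume(json.load(resp))
```

For a fully self-hosted option, Open Library publishes bulk data dumps that could be imported instead of querying an external API at all.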
Pre-release reviews is a problem on more than just books too. I wish there were more systems to verify that you've at least bought the book, movie, or the product in general before you can review it.
A number of copy-protection systems for mid-80s to mid-90s computer games asked you to open the printed manual that came with the game, go to (e.g.) page 32, and type in the first word of the third paragraph. It doesn't seem that hard to automate doing that for ebooks these days. After all, Goodreads is owned by Amazon, so they have the corpus. Just use chapters instead of pages; that will work across all formats.
I'm not sure that would help with organized brigading campaigns like this. In the Internet age, one person with the book could supply answers to these questions for hundreds of people posting fake reviews with only a nominal investment of time.
If each prompt were unique (and given the number of pages and words in a book, they could be) then no - it wouldn't be a nominal investment of time to answer all of them.
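A per-account prompt generator along those lines could be sketched like this (purely hypothetical: it assumes the retailer has the chapter text, and it doesn't stop someone who pirates the whole book, only the sharing of a single answer key):

```python
import hashlib

def make_challenge(account_id, book_id, chapters):
    """Derive a per-account 'type word N of chapter C' prompt.

    The chapter/word choice is keyed to the account, so each account
    gets a different prompt and answers can't simply be shared.
    chapters: list of chapter texts.
    """
    seed = hashlib.sha256(f"{account_id}:{book_id}".encode()).digest()
    chapter_idx = seed[0] % len(chapters)
    words = chapters[chapter_idx].split()
    word_idx = int.from_bytes(seed[1:3], "big") % len(words)
    prompt = f"What is word #{word_idx + 1} of chapter {chapter_idx + 1}?"
    answer = words[word_idx].strip(".,;:!?\"'").lower()
    return prompt, answer

def check_answer(account_id, book_id, chapters, submitted):
    """Re-derive the expected answer and compare case-insensitively."""
    _, answer = make_challenge(account_id, book_id, chapters)
    return submitted.strip().lower() == answer
```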
Goodreads is owned by Amazon, who is incentivized by the pre-release purchases that these reviews drive. They're not going to put a lock on pre-release reviews (nor should they) to solve a trolling/bad actor issue.
Why shouldn't they? I think it's ridiculous to be able to review something which you have not experienced and I don't see a good reason why they should be allowed. The lockout should be until at least one person (ideally more) have experienced the fully released product.
Authors and publishers ship pre-release advance copies to popular reviewers as part of the promotion leading up to launch, like movies do with prescreenings. Goodreads is notorious for having these types of reviewers; in fact, it's one of the carrots that serial reviewers go after.
It's amazing how I see this problem everywhere, not only with books. On trakt.tv, all movies and shows that are far from being released already have ratings...
I'm not familiar with the backend of Goodreads but perhaps one tiny step forward would be for the book page to not open for ratings/comments prior to review copy distribution. And because review copy users already have to on-board (in some way) to get those copies, Goodreads could provide an integration point for authors to plug that reader list in and only let those folks respond initially, with restrictions lifting at launch.
Anyway, I wish them best of luck. This problem is widespread, see: Amazon, Metacritic, Rotten Tomatoes, et al.
Edit: "What's the second word of chapter 4" would work better but that's assuming harassers can't source an ebook and share it around... it'd prevent pre-release spam like this but so would disabling reviews until the book is actually released
It doesn't work, because these trolls would just seed pirated copies of the book to their group. Even if you forced people to use a credit card to register, they'd just buy prepaid cards for nothing, just to keep trolling. These people are cruel, and they come in large numbers.
The first thing Goodreads asks you to do when you create a new account is to list and rate a bunch of books you've read and either liked or hated. This is important, because they (ostensibly) want to recommend other books to you.
When I first created an account, I quickly rated maybe 50 books I had read over the years on a 1-5 scale. I didn't have most of them handy.
In Neal Stephenson's "Fall, or Dodge in Hell", a woman becomes a victim of organized online harassment. The harassers keep repeating the same accusation. She has a rich tech friend who invents "Organized Proxies for Execration" or APEs that overwhelm the internet with huge volumes of contradictory messages. As the inventor says: "-why, even the most credulous user will be inoculated with so many differing, and in many cases contradictory, characterizations as to raise doubts in their minds as to the veracity of any one characterization, and hence the reliability of the Miasma as a whole." Here "Miasma" is Stephenson's word for the cesspool that is social networks. The strategy works more or less for the victim, but (spoiler) has unintended consequences.
I read with some surprise his claims that Amazon wouldn't take down a fake review (which posted fake anti-Semitic quotes, even when the publisher sent them the pages the quotes were supposed to be from). So I checked Amazon myself, and sure enough there is the number one review[1] claiming this tome continues the unfortunate slide of Patrick S. Tomlinson into an alt-right provocateur with blatant anti-Semitic tropes and characters littered throughout the book.
Ironically, most of the other one star reviews claim the book is full of "thinly-veiled, progressive tropes"
I can't imagine how frustrating this is for business owners. There are a bunch of obviously fake reviews by people in India on the local GNC branch. If you just look at the star rating, you'll think "Wow, that place must suck!". Then you read the reviews and they're from people who have never set foot in the United States, never mind GNC. Reviews are basically useless.
This definitely seems to cross the line from troll to criminal:
>Many of the spoofed accounts use the identities of Tomlinson’s friends and peers in the author community, creating the illusion that people he knows are giving one-star reviews and saying bad things about him. Dozens of authors have been spoofed in this manner, including the entire board of directors of the Science Fiction and Fantasy Writers of America.
>Not all of these fake reviews are one stars – some give five star or other highly rated ratings. The catch with these highly rated reviews is many of them are created to give the false appearance that they were written by Tomlinson to raise his own Goodreads ratings, spoofing his name and photo and sometimes even using his own copyrighted writings. These spoofed reviews often also show Tomlinson falsely saying things which would hurt his own reputation.
>Gareth L. Powell and Beth Cato were among the authors spoofed, with their photos and names used to create fake accounts to attack Tomlinson’s books.
I think calling these "trolls" doesn't go far enough. I am sure this crosses criminal lines, or at the very least breaks several laws.
The victims should sue, or the government should pursue action against conduct that rises to this level. It's not just "trolling" to do this, in my opinion.
Fake reviews on Amazon, vote bombing on youtube, fake upvotes on reddit, fake likes on facebook. I'd even extend this tangentially to cyber squatting DNS records, email spam, and robo callers.
It's a matter of trust. We think we can just set up a big polling station somewhere and get the community's opinion on something, one person one vote. "Here's the big central aggregate, take the law of averages and you have a good sense of how good something is". But on the internet the Sybil attack reigns supreme. This assumption doesn't hold.
Whenever I read articles on problems like these, the topic of "Why it's happening and how to fix it" invariably drifts to "We need better moderation". I never think that. I think "Stop trusting Sybil". Not even THAT! Stop asking me to trust random people I don't trust!
What if instead of having a single aggregate review, we "web of trust" it instead? Here's the system as I imagine it. It's not fully thought out at this point, so please chime in with criticisms if you have them.
The gist: every user has their own list of ratings for books in the system, and a list of people they "trust". For each book, you see the average rating of the people you trust, recursing out through the web of trust. The ratings are calculated live, on demand, with the only constant being each individual's personal ratings.
I'm leaving a lot undefined right now, but an example of how I imagine such a system to work:
Let's say you know Harry Potter and the Chamber of Secrets is the perfect book. You go to the book's page and review the ratings. Your rating is obviously 5. You see lots of other people in your list who have rated it 5 (naturally). But you also see that there are some 3's in there. You can click on the 3's and see how they reached you through the web of trust. It turns out a bunch of 3's are coming from someone named Bob; Bob trusts Alice, and Alice has trusted a bunch of people from some kind of Harry-Potter-hating cabal of philistines. You could blacklist the cabal, but there are too many of them, so instead you blacklist Alice. All the 3's are gone, and so are all the other reviews from the cabal. Your list of reviews now more accurately reflects views that you would trust.
In this way, your effort to moderate away the opinions of people you don't trust has done double duty, it has cleared views you disagree with and do not trust, and it has also done so for the people who trust you as well.
It is decentralized moderation.
There are some challenges here. For example: how do you bootstrap the system for new users? What is the best way to calculate the average reviews in a timely manner? How much weight should be applied to a friend of a friend of a friend? What kind of feedback could we give to users to incentivize them against trusting people like the philistine cabal? etc.
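A toy implementation of the core trust-walk, just to make the idea concrete (the decay factor and hop limit are made-up answers to that friend-of-a-friend weighting question):

```python
from collections import deque

def trusted_rating(me, book, trusts, ratings, blacklist=frozenset(),
                   decay=0.5, max_hops=3):
    """Average a book's ratings over my web of trust.

    trusts:  {user: [users they trust]}
    ratings: {user: {book: rating}}
    Direct friends get full weight; each extra hop multiplies the
    weight by `decay`. Blacklisted users, and anyone reachable only
    through them, drop out of the average entirely.
    """
    weights = {}                       # user -> weight via shortest path
    queue = deque([(me, 0)])           # BFS outward from me
    seen = {me}
    while queue:
        user, hops = queue.popleft()
        if hops > 0:
            weights[user] = decay ** (hops - 1)
        if hops == max_hops:
            continue
        for friend in trusts.get(user, []):
            if friend in blacklist or friend in seen:
                continue
            seen.add(friend)
            queue.append((friend, hops + 1))
    num = sum(w * ratings[u][book] for u, w in weights.items()
              if book in ratings.get(u, {}))
    den = sum(w for u, w in weights.items()
              if book in ratings.get(u, {}))
    return num / den if den else None
```

Note how blacklisting a single "gateway" user (Alice, in the example above) prunes everyone reachable only through her, which is exactly the double-duty moderation effect described.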
I have some thoughts on this, I'll spare you.
Like any decentralized system, it's more work, it's more complex, and it has some surprising and ugly edge cases. I also don't think I'm some genius for coming up with this; it's just an application of Web of Trust. But I have not seen a system such as this in practice. Nor do I ever seem to see people talk about it when the topics of moderation and spam come up. If you know of any such case studies, let me know!
Generally the problem with systems like this is that you need a critical mass of users to rate books for a site to have any value. If the only ratings you see are from people you trust, this becomes a lot worse. Once you add UI friction, adoption drops even more. Generally because of network effects, sites that use dark patterns and prioritize engagement over trustworthiness will tend to thrive. Just look at the history of social media.
Some people are trying to battle against negative fake reviews by posting positive fake reviews. It's not clear to me that they are as morally superior as they evidently believe.
Edit: The solution is to fix the voting system, not to abuse it further because you believe you are virtuous.
That's why I used relative language such as "as morally superior as they evidently believe"
Edit: If someone is caught posting fake positive reviews to boost a rating[1], that doesn't make it okay to post fake negative reviews 'to fight the good fight'.
[1] As opposed to impugning someone's character via impersonation, as appears to be happening here
It's not fake praise vs harassment, it's fake praise vs fake criticism.
In any case this will make for an excellent marketing campaign. I don't even know what the book is about (I assume something politically charged because of this campaign) but now I know the book!
I do not believe that the author has faked their own fake criticism in this case, but this marketing tactic has been used successfully before by others.
Excerpt from the article: "Jason: I saw one troll spoofing your name and picture post an extremely nasty, long-winded comment on Goodreads, discussing intimate details of your family life and making it sound like you want to kill yourself. None of this could be considered a review of your book by any conceivable means, yet Goodreads still hasn't removed it. What's going on here?"
I'm really curious how anybody thinks this isn't harassment.
How do the people posting positive fake reviews do that? They're responding to fake reviews in the only way they can, by offsetting them. Yes, there's a better solution that Goodreads can implement, but only Goodreads can implement it; not any of the commenters you criticize here.
Since they can't fix the voting system, and absent any motivation from Goodreads to fix it, is it really bad for them to respond the best way they can using the only tools they have available? After all, if the voting system is widely abused by everyone to the point where it becomes a dumpster fire, maybe that will force Goodreads to care.
I did not claim they were bad. From their own words, some of them evidently believe they are not just morally superior to the people posting fake negative reviews, but vastly morally superior. It is not obvious that this is the case.
Let me rephrase that -- is there a more moral action they could take?
If burning the review section to the ground with garbage content forces Goodreads to take notice, I could see someone arguing that it is a morally productive, good action to take -- not just neutral, but in fact morally superior to any of the complaining we're doing on HN, since Goodreads is 100% not going to care about anything we write here.
I'm not certain I agree with that perspective, but I wouldn't dismiss it out of hand. I'm somewhat cautious about making a strong claim that offsetting fake reviews isn't a morally desirable action.
> Let me rephrase that -- is there a more moral action they could take?
This depends on your basic values, but one could definitely argue that doing nothing is morally superior to abusing the voting system. Only sometimes do two wrongs make a right, and I'm not convinced this is one of those times.
OK, so let's say they do nothing, the business receiving fake negative reviews closes as a result, and people lose their jobs. Was that the morally superior choice?
Cherry picking unlikely hypothetical scenarios (this is GR, not yelp) is a bad strategy for making a binary determination of the morality of a choice.
When you ask "Was that the morally superior choice", what set of options are you considering? In my prior statement, I simply claimed that it can be argued that doing nothing is morally superior to abusing the voting system.
> If burning the review section to the ground with garbage content forces Goodreads to take notice, I could see someone arguing that it is a morally productive, good action to take -- not just neutral, but in fact morally superior to
People who post fake negative reviews also use this argument, combined with allegations of moral inferiority of their target.