Hacker News Needs Honeypots (nashcoding.com)
450 points by tansey on Oct 28, 2011 | 150 comments



"Nevertheless, I do believe that we are seeing a continuing trend downward in overall article quality on the front page."

I agree comment quality has decreased, but I'm not so sure about the frontpage. I created http://news.ycombinator.com/classic so I could detect frontpage decline. It shows what the frontpage would look like if we only counted votes of users who joined HN in the first year. Usually it looks the same as the frontpage, but with a time lag because there are fewer voters.


But that seems like a poor heuristic in general. For instance, maybe the early users are not necessarily great at filtering articles; they were just good at not submitting off-topic ones. A honeypot approach is a much more flexible and robust way to enforce guidelines without visibly punishing users.


Yeah, you may be right. I might try this. Does anyone have any opinions about whether this is a good idea, and if so how best to do it?


Something about this bothers me. Can't put my finger on it.

I think the assumption is that there are articles so bad that it should be obvious that they are off-topic. In the past, I've submitted things on the border. I imagine many others have too. I've always counted on the groupthink to correct any errors in submission I might make. This seems to assume that there are hard guidelines. Watching the board over the years, I'm not sure that assumption is accurate.

Put differently, if you had a cache of really bad articles, shouldn't we see them? That way we'd know not to submit. But if you already know they're bad, then what's the point of voting or flagging?

Perhaps I'm just mentally adrift here. Honeypots make sense to me when we are talking about boolean things: a website visitor is either harmful or not. An email sender is either a spammer or not. But I'm not sure at all that this concept applies to something like an essay. It seems like if it did, you could just use the flagging behavior mentioned to rank the articles and dump everybody else's votes. Right? This is like verifying voting behavior by setting up a completely different system to rank quality detectors. But if you could rank quality detectors, why keep the old system? And if not, how would you separate which parts of which system are useful and which are not?


I think it is becoming more and more clear that Google's approach is the best one: your ideal front page has to be personalized (when you are logged in, Google personalizes your results with location, previous searches, +1s, and even who you follow on Twitter).

Trying to be everything to everybody means there will be people left with sub-optimal results.


This becomes suboptimal on a news site. I don't want to see only content that I already know is interesting to me; I want to read things I would never have had a chance of coming across otherwise. Of course, it is also impossible to read everything. This is why the voting system works so well: if many users find an article interesting, chances are that it will be interesting to everyone. This dynamic is amplified substantially in a specific group like Hacker News. This is why I don't like personalised pages; they are only really feasible for things like Google+ and certain types of searches.


For those downvoting: Please share your thoughts. If you disagree, I'd love to know why.


It's not you; it's that pg has said many times (and I agree) that fragmenting the front page is a bad idea. We must all see the same front page to judge the same quality.


Interesting. However, many people already filter the front page, whether by points or via the Twitter accounts:

http://twitter.com/#!/newsyc20

http://twitter.com/#!/newsyc50

http://twitter.com/#!/newsyc100

http://twitter.com/#!/newsyc150

And from each filter, people auto-select things that interest them. Sometimes I only see a story when it is retweeted to me.

And this you can't prevent. It stems from the fact that different people have different definitions of what "quality" means.

Which is the core problem highlighted by the linked blog post.


The front-page of HN is a filtered list in and of itself. And I for one actually like the result.

Some people would like to see a different filter and thus create one, because they can. That is indeed not something you can, or should want to, prevent.

But the fact that some people create their own filters is not a motivation to not tweak the HN front-page filter in such a way that the front-page matches the intended goal (pg's goal in this case, presumably adopted by the majority of HN readers, more-or-less codified in the guidelines) as closely as possible.


"The core problem ... lies on the fact that different people have different definitions of what "quality" means."

The user is not the problem. The user is the solution. http://news.ycombinator.com/item?id=3170840


I agree with this. However, I don't think that existing communities can adapt to it easily. Instead I think it has to be part of the initial design of the platform, so that it's tightly integrated into the service. You could go further than Google and actually deeply integrate a social graph, with feed weighting based on who you follow and what they vote on, as well as your previous voting record and maybe some shortcut to topic, such as tags.


Why aren't we taking the Bayesian approach?


Can you be more explicit?


The concept of a 'honeypot' article is a little ridiculous.

The best you can hope for is "this preferred population of people overwhelmingly dislikes this article". The problem at hand is that you want to foster the interests and tastes of that one group of people.

So instead of crusading against lame newbs with a labour-intensive system of 'silly articles' (who picks which articles ought to be downvoted?), you could compare their weighted voting history against your population of 'good users', yadda yadda yadda.


I'll bet he wants to complificate the formula to look something more like

"P(H|Q)=P(Q|H)P(H)/P(Q)" where

H = "User upvoted a honeypot article" and

Q = "User's votes are not a good signal for article quality"

And a similar adjustment for flagging.
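
Sketched in Python (every probability below is a made-up placeholder; estimating them from actual vote logs would be the hard part):

    def posterior(p_q_given_h, p_h, p_q):
        """P(H|Q) = P(Q|H) * P(H) / P(Q), per Bayes' rule."""
        return p_q_given_h * p_h / p_q

    # e.g. if 60% of honeypot-upvoters turn out to be bad signals (P(Q|H)),
    # 5% of all users upvote a honeypot (P(H)), and 10% are bad signals (P(Q)):
    print(posterior(0.6, 0.05, 0.1))  # P(H|Q) = 0.3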


A Bayesian approach could be nice if it can be used to increase diversity.


Sounds like a machine learning problem. Showing people the cache of really bad articles would lead to overfitting.

Making each user's vote a vector with the value of the honeypot formula determining its strength, and making the vote total for any article the floor of the sum of those weighted votes, would be pretty cool, but might be too computationally expensive--certainly more so than the user-categorization approach.
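
A minimal sketch of that idea, assuming a per-user weight h(u) from the honeypot formula (names and numbers here are invented):

    import math

    # Each vote is weighted by the voter's honeypot score h(u); the article's
    # total is the floor of the weighted sum, as suggested above.
    def article_total(votes, h):
        return math.floor(sum(h.get(user, 1.0) * v for user, v in votes.items()))

    # Hypothetical: two trusted upvotes and one heavily discounted upvote.
    print(article_total({"a": 1, "b": 1, "c": 1}, {"a": 1.0, "b": 0.9, "c": 0.2}))  # 2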


I don't see any problem. The article is talking about ranking upvoters and flaggers, not submitters, so you could continue with your submissions and not be penalized.


I don't know Arc, but I'm happy to write out the idea in simpler instructions than the blog post if it helps.

The implicit approach is very practical. You basically just need to bootstrap it and then it will run itself.

To bootstrap:

- Track if each user has seen an article or not.

- Track how many times each user flags any article, let's call this any_flagged

- Add an admin-only "honeypot" button (or track articles flagged by admins)

- When an admin marks an article as a honeypot:

  1) Increment the honeypot_seen counter of anyone who sees (or has seen) the article.

  2) Increment the honeypot_flagged counter of anyone who flags the article.

  3) Increment the honeypot_upvoted counter of anyone who upvotes the article.

- Repeat until you are happy with the number of honeypots and the coverage of the community. Intuition says that if you focus on front-page articles, you should be fine after 30 or so flagged articles.

Then calculate your super flaggers:

- Apply the h-formula to each user, h(u) = (honeypot_flagged - honeypot_upvoted) / (honeypot_seen * any_flagged)

- Select the top N% to be super flaggers. Again, intuition would say 5-10% is reasonable, but that depends on the way the data looks.

Set it to implicit mode. Now, each article has super flags tracked and when its super flag threshold (percentage of super flaggers) is crossed, you declare it a honeypot. Then you run a process analogous to the one in the bootstrapping phase.
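
For concreteness, here's the bookkeeping sketched in Python rather than Arc (field and function names are mine, not anything HN actually has):

    from dataclasses import dataclass

    @dataclass
    class Counters:
        honeypot_seen: int = 0
        honeypot_flagged: int = 0
        honeypot_upvoted: int = 0
        any_flagged: int = 0  # total flags by this user, honeypot or not

    def mark_honeypot(stats, seen_by, flagged_by, upvoted_by):
        # Run when an admin marks an article (bootstrap phase), or when the
        # super-flag threshold is crossed (implicit phase).
        for u in seen_by:
            stats[u].honeypot_seen += 1
        for u in flagged_by:
            stats[u].honeypot_flagged += 1
        for u in upvoted_by:
            stats[u].honeypot_upvoted += 1

    def h(c):
        # h(u) = (honeypot_flagged - honeypot_upvoted) / (honeypot_seen * any_flagged)
        denom = c.honeypot_seen * c.any_flagged
        return (c.honeypot_flagged - c.honeypot_upvoted) / denom if denom else 0.0

    def super_flaggers(stats, top_fraction=0.05):
        # Top N% of users by h-score; 5-10% per the intuition above.
        ranked = sorted(stats, key=lambda u: h(stats[u]), reverse=True)
        return set(ranked[:max(1, int(len(ranked) * top_fraction))])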


Clever idea in general, a few issues:

1. Sometimes people upvote without reading the article just because the comments are good, and want others to benefit from that as well.

2. Sometimes people upvote to save a submission, since there's no separate save function. For example, I want to send it to a friend later, but not right now, so I upvote it to make it easier to find (b/c HN search is hit-or-miss).

I confess I'm guilty of these, but I doubt I'm the only one.

By way of comparison, take Reddit. Upvoting and saving are separate functions. You can save submissions you think you might want to revisit later. Reasons for doing so:

1. Scanning the headlines quickly, but don't have time to actually read everything, want to save the article and comments for later perusal (lunch break, after work, whatever).

2. Subreddits reduce the cost of upvoting. For example, every time I consider whether to upvote a Bitcoin story on HN, I consider whether it's front-page worthy. On Reddit, that's not a problem, I can just assume it won't hit the general front page b/c it's a relatively niche subject, but the upvote might help it within its own subreddit.

One more potential problem with the idea of superflaggers. If 'social media experts', or spammers, or whatever the people are who game sites like Digg and Reddit got wind of the fact that flagging honeypots could increase the weight of their flags and/or votes, mightn't they also figure out how to abuse that?

A professional, as some of them seem to be (eg, able to spend all day every day doing this), might be able to achieve a denominator very close to 1, and a numerator close to the total actual honeypots.


> One more potential problem with the idea of superflaggers. If 'social media experts', or spammers, or whatever the people are who game sites like Digg and Reddit got wind of the fact that flagging honeypots could increase the weight of their flags and/or votes, mightn't they also figure out how to abuse that? A professional, as some of them seem to be (eg, able to spend all day every day doing this), might be able to achieve a denominator very close to 1, and a numerator close to the total actual honeypots.

Super flaggers receive no special powers other than the ability to contribute to the honeypot score of a given article. Their votes and flags are counted the same as a normal user. I addressed this in more detail in another comment below, but basically the ability to and utility of gaming this system is minuscule.


>Super flaggers receive no special powers other than the ability to contribute to the honeypot score of a given article. //

No special powers other than helping to ensure their submissions are not 'honeypotted' and submissions contrary to their view are 'honeypotted'?

Wouldn't that also downgrade those who hold contrary views to them - as the contrarians would be more likely to upvote the stories that the gamers are helping to get marked as honeypots - thus ensuring that the gamers keep those with opposing views from gaining a position in the quality control caucus?

I notice you're in AI, have you run some formalised tests on how such a voting system would play out?

My personal (untested) preference is towards making voting plain and all scores open and then letting users somehow create their own metric for filtering. Perhaps that won't work on the scale of a successful site though.

Are there actually upvoting and flagging cabals in operation on this site now?


> No special powers other than helping to ensure their submissions are not 'honeypotted' and submissions contrary to their view are 'honeypotted'?

Please see my detailed explanation of why this is not a problem [1].

> I notice you're in AI, have you run some formalised tests on how such a voting system would play out?

No. If there is a top-tier conference publication in it, I would be happy to do some MC runs. That being said, this is not really a publishable idea unless I can actually implement it and measure the results somehow on a real site. :)

> Are there actually upvoting and flagging cabals in operation on this site now?

Probably not. There definitely are such rings on Digg and reddit (I know for a fact). This is a general system, so it could be useful on any social news site.

[1] http://news.ycombinator.com/item?id=3166548


>Sometimes people upvote to save a submission,

Isn't this what bookmarks are for?


OT: wow, I always assumed voting on reddit stories was per-subreddit. Does a story submitted to multiple subreddits really share the sum of the upvotes?


There's no such thing as "a story submitted on multiple subreddits". Any submission is associated with exactly one subreddit. But an upvote there also counts as an upvote towards sending it to the frontpage.


> Track how many times each user flags any article, let's call this any_flagged

This is a pretty fatal flaw in your plan. Not everyone can flag, and it seems that if you flag too much you lose the ability to flag. This restriction would have to be relaxed before your plan could be tested.


Seems like a bad idea that isn't even necessary. I went and looked at the guidelines, which I admit to not having done in some time, and they say:

Off-Topic: Most stories about politics, or crime, or sports, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic.

Of the 30 items on the front page as I write this, none fits that description. The closest might be the one about Google not removing police brutality videos. But that hardly seems to be a seriously negligent submission.

And this is what they say is on-topic: On-Topic: Anything that good hackers would find interesting. That includes more than hacking and startups. If you had to reduce it to a sentence, the answer might be: anything that gratifies one's intellectual curiosity.

While there's a lot that may not qualify according to these guidelines, IMO, I'm happy to leave it up to the voters of HN to decide. Clearly people thought these were the 30 most interesting stories right now (based on how voting is done).

I'd argue that better content needs to be written, more than I'd argue that we need honeypots. Maybe some way to promote a really in-depth and insightful comment to a front-page submission or something.


That's a fair point. I wrote the post as a reaction to an off-topic article that I saw on the front page. That's not to say that the overall quality of HN isn't superb, but I was just particularly upset that a glaringly off-topic article was so highly upvoted.

This is a general system, however, and it certainly addresses a problem from which many other social news sites suffer. I can imagine reddit implementing something like this, for instance. Thus, I think even if one believes it may be overkill for Hacker News (at the moment), it's an important contribution to the question of how to properly and automatically moderate social news sites.


Agreed. You should discuss this with Reddit and other social news sites.


Any suggestions on how? Is there a good subreddit to post this on and get attention?


Which article?


I haven't noticed any huge decline in the front page recently, although I do think that many stories scroll off the new page way too fast. This is still a huge problem for longer articles as well as lectures and podcasts, and basically prevents anything but short pithy content from making the front page.

As far as comments go, I think the main problem is that there are just too many comments that don't really contribute much. The other thing that bothers me is that there are a handful of commenters who contribute very little of value and yet are some of the fastest rising in terms of karma. I've never been particularly sensitive to mean-spirited comments, so whether this has gotten better or worse I have no idea.

edit: Maybe the new page could be changed to some kind of queue system for non-breaking news, where it would be limited to 30 new stories per hour.


This is how I classify articles (loosely based on a famous quote by Eleanor Roosevelt*):

- Articles on ideas: +1. Examples from the current homepage include "Why we moved off the cloud" -- and that's kind of stretching it.

- Articles about events or claims: "meh". Examples: Batch is the best photo editor, Google+ now available for Google Apps, etc.

- Articles on other people, current events, activism: flag. Examples: Google police brutality, Vimeo bans game videos, etc. This is more slashdot material, nothing interesting.

* The Eleanor Roosevelt quote is something like "Great minds discuss ideas, average minds discuss events, small minds discuss other people."

EDIT: I realized after I edited my answer that it doesn't really apply to your comment. I meant to say that given this method of classification, it's noticeable that the quality of articles has decreased, and I'd love to see more ideas rather than pointless "they're taking our rights" articles.


The quote is not by Eleanor Roosevelt. It goes back to at least 1901, in the precisely titled Reminiscences of Legal and Social Life in Edinburgh and London, 1850-1900. (Discussed at http://en.wikiquote.org/wiki/Eleanor_Roosevelt.)

Also, I don't think it's true. It sounds true because of the way it is formulated. But great minds often discuss all of these things. Think of, say, David Hume. Clearly a great mind, clearly (IIRC) interested in all three.


If my upvotes/downvotes/flags may be secretly used to profile me in ways I can't know about, and my secret profile might negatively impact my experience on this site, isn't my best option to not participate at all?

In short, with this mechanism I can't trust that anything I do, be it clicking through to an article or flagging one, won't negatively classify me. I can't be sure this site isn't trying to trap me somehow.

My advice is to let all of this hand-wringing about such nebulous issues as 'quality' go. It's a fairly decent community here and that's about as good as you could expect from the Internet at large. If you're really worried about what this site has become, simply conclude the 'Arc experiment' and shut it down.


The proposed negative impact is that votes from "bad" users would be counted less. So it can't be that your best option is to not participate: if you stop voting, you already punish yourself worse than this algorithm would.


I don't see how a system like this would improve comment quality.

In my opinion, high comment quality is strongly related to the atmosphere of respect on HN. Making comment scores private helped with this.

Creating complex rules, or rules that mysteriously favor the votes of some users relative to others, leads to the perception that HN is a caste system, even if status is earned over time... and nudges the atmosphere more toward competition than friendly discussion.

The one feature that I think could be useful would be some way to merge stories that are essentially identical. There is a karma incentive for people to post lots of stories about topics that are "trending" at the moment. The more users we have, the more thorough the community will be with this, and it's both a good thing and a bad thing. The upside is deeper coverage of important events, the biggest downside is a fragmented discussion, but in addition the homepage is often filled with 4 or 5 (or more) highly similar stories.

Adding "merge" would improve the s/n ratio, reduce redundancy in the discussions, and make it easier for someone checking HN at the end of the day to get up to speed on what happened and to (perhaps) leave an insightful comment or two.


I think you could do something much simpler, like letting us downvote submissions. I only flag egregiously bad submissions, there are many other times I wish I could downvote though.


Seconded. There is so much stuff that is superfluous or just not that good, but not sufficiently bad to flag.


The second proposal, implicit honeypots, is similar to something I'd been thinking of as a way to address the degradation of comment quality, and could be finagled to address that larger issue.

My idea was that you would start off with a limited number of "trusted" members, and invisibly flip a bit in their profile. These would become supervoters, and their votes would confer or remove some multiple of karma for each up- or down-vote. In so doing, they would exert a proportionally larger influence on the visibility of comments, hopefully helping to highlight good ones and bury bad ones.

The supervoter bit would not be static; it could be gained or lost. A supervoter whose submitted comments received net negative votes by other supervoters would lose their bit, as would one who consistently voted against the trend established by the other supervoters. Similarly, a non-supervoter who tended to submit and upvote comments that were favoured by existing supervoters and downvote comments that were buried by supervoters (before they had been greyed out, to avoid gaming) could, after a certain threshold, have their bits flipped as well.

Ideally, this process would be entirely invisible, with no one but yourself and other similarly privileged users able to see who had the bit. Similarly, it would be best if the change went unannounced. I'm aware that HN is OSS; perhaps it would be better to leave this out of that repo, as reddit does with anti-spam measures. The reason for the secrecy is the same as that for hiding karma scores: it reduces karmawhoring and gaming.

As for submissions, I think the problem there is both less severe and easier to solve. HN's front page is still slow enough that it can be hand-curated. More mods would likely be able to keep a handle on things. However, an algorithmic solution could work. Similar to supervoters, have superflaggers: if a high enough proportion of the submissions someone flags end up being removed, flip a superflagger bit in their profile. Then, any article in the new queue that was flagged by (say) 3 or more superflaggers would be automatically removed.

The reason I like these proposals, as well as the submission's, is that they are invisible and nobody knows if they are even operational. By hiding these workings from membership at large, I believe it would be possible to have a positive effect on quality while still discouraging the kinds of behaviours that have led to a massive decrease in quality on large parts of reddit.
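
To make the mechanics concrete, a rough sketch (the multiplier and threshold are placeholder numbers, not recommendations):

    SUPERVOTER_WEIGHT = 3  # invented multiplier

    def karma_delta(direction, voter_is_supervoter):
        # direction: +1 for an upvote, -1 for a downvote.
        return direction * (SUPERVOTER_WEIGHT if voter_is_supervoter else 1)

    def should_auto_remove(flaggers, superflaggers, threshold=3):
        # A new-queue article flagged by 3+ superflaggers (sets of user ids) gets pulled.
        return len(flaggers & superflaggers) >= threshold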


I'm worried that this would lead to groupthink.


That's a legitimate concern, but I don't think HN is quite that homogeneous. I also don't think we tend to downvote out of disagreement, which would be required in order for groupthink to set in. You occasionally see a thoughtful comment expressing an unpopular sentiment get downvoted, but not often and usually not for long.

For groupthink to be a serious threat, it's not simply enough to have the top-rated posts express a given view, you also have to have contrarian views be buried. I don't think that a group of people that pg would be likely to pick would a) upvote all the same things, or b) downvote posts they disagree with. As the supervoter bit would be passed to people who voted in generally the same way as a reasonable number of the superusers, it would be unlikely that people who downvoted out of disagreement would get the bit. Further, I think there's a wide enough range of views amongst the people pg would likely select to minimize the likelihood of a single viewpoint gaining dominance.

There's enough contrarianism built into the basic personality of most HNers that I think we'd be fine.


I have unfortunately seen all the things you describe, for example see any thread about PHP, or the thread about perl and random syntax, or about using an ORM for SQL.

I've seen it in threads about social issues as well, although none come to mind right now.


I've seen it too, and while disturbing, I don't think it's representative of the HN population as a whole, and I certainly don't think it's representative of the types of people who would initially be picked. The goal of my proposal is to counteract the people who do engage in those unwanted behaviours by giving proportionally more voting power to those who demonstrate that they don't.


Classically, honeypots are put into play to avert malicious activity / users.

My gut instinct is that applying it in this format, while extremely creative (wow - very cool thought experiment), may have undesirable consequences. The intuition comes merely from the fact that you are creating an adversarial premise for a wide-band community with varying maturity and motivation.

n.b. have personal / professional experience wiring such efforts and am certain you know top blokes who're ninjas in the game


> you are creating an adversarial premise for a wide-band community with varying maturity and motivation.

I really have to disagree here. There is this misconception in these comments that somehow my system would let people get these massive egos and encourage them to harm others. That is simply not true, for several reasons:

1) You will never know if you are a super flagger, normal user, or ignored/penalized user.

2) Super flaggers gain no power to move things up or down for a specific article. They are merely there as a proxy for detecting if anyone is consistently upvoting improper articles. If the top 10% are super flaggers, the next 80% are normal users, and the bottom 10% are ignored, then that means the super flaggers will only account for 1/9th of the upvotes on average. And if they flag an article, it will not get removed faster than if a normal user flags it; rather, it will simply increase the chance that the article will be used as a honeypot in the future.

3) A single super flagger has little leverage, assuming you choose a large enough pool of super flaggers. One person will not do much to push the honeypot threshold over the top.

4) It's a moving target. So even if you were to ascertain that you are a super flagger and you decided to try to flag articles inappropriately (in the ever-so-small amount that you can do damage that way), you won't be a super flagger for long. Rather, you'll quickly be drowned out by your own noise and you'll fall off the super flagger list when the next update is performed.

That's not to say this system is perfect. I suppose one could manipulate it if:

1) You were somehow able to determine that you were a super flagger (non-trivial).

2) You were able to get a bunch of evil buddies together who also were super flaggers.

3) Your group is a sufficiently large portion of the total super flagger population, say 30%.

4) The admins did not include some oversight to periodically check up on what was being made into honeypots.

Then, yes, you could go to town flagging things for a while. It's certainly not foolproof, but if you had that large of a coordinated group on HN, you could wreak havoc in much more efficient and straightforward ways.


It seems to me that most of the issues are caused by too few ideal HNers watching the "New" page and up-voting quality content. Instead, primarily controversial submissions manage to garner the necessary number of up-votes to make it to the front page before falling off of the "New" page.

I don't believe that the proposed honeypot solution would address this.


Random articles from the "New" page could be occasionally inserted into the main page to see if they catch any up votes.


I was not even thinking about the mechanics of your system, which as you've reasoned above and in the article may work beautifully... merely the fact that HN would start deploying such a methodology opens up a road that may have interesting ramifications down the track.

Am sorry if my response sounds less precise or more philosophical than you want... but it is well intentioned.

In a democracy, one common problem is that you have to respect others that you think are voting wrongly and put up with bad content. HN as it stands now is a wide-band place... Eternal September is always going to be a risk.

Another way to address some of these concerns would be to have sub-sections (much like a normal web-board) where people are encouraged to discuss some common subsets / topics... or even have a special section for newer folks.


HN already does things like deadpooling people who are consistently offtopic, trolling or just nuts. I think that unless steps are taken to keep bad content under control, any forum is going to go under.


Thanks for redirecting me here.

>3) Your group is a sufficiently large portion of the total super flagger population, say 30%. //

There will be populations that aren't interested in the original mix. These populations could swamp the site under such a system.

So suppose HN becomes popular with a particular niche who aren't interested in the original mix, and they become a large proportion of the population. For example: there are many ebay sellers ("ebayers") that like the site for the occasional link they don't find elsewhere. These ebayers are usually inactive. But they start to vote against nerdy tech stuff and always upvote ebay-related articles. The site focus will drift, and the feedback loop will attract more users that want ebay stuff and put off others from voting (as their votes are getting ignored because non-ebay stuff starts to become a honeypot).

...

Anyway. Would love to play with such a system and see where it goes. Like I said before I'd love it if somehow the site could let you implement this whilst at the same time allowing me to ignore your honeypot system and just have displayed voting (+clickthroughs and saves). That is we'd be able to establish our own metrics. Then people could try different filter algos and choose which gives them the nicest site.


The real problem your honey-pot suggestion has is that there is a diversity of opinions and expertise across the site, and it's not the case that they're easily separable.

Should you discount someone's technical opinion because they are vocal about their political opinion? Because that's essentially what your system would do. And it would do so without notifying them that this was occurring.


I think the hardest part would be trying to discern intent just from someone upvoting one of these honeypot articles, given the loose definition of what's considered on-topic here. You'd be stuck between being subtle and catching false positives, and being obvious and driving people away / generating useless discussion with obviously off-topic stories on the front page. I'd argue against this for that reason, and because you'd probably alter users' voting behavior: they'd try to "avoid the honeypot" rather than simply evaluating the article.

I'd take a different approach, and argue that most users are capable and willing to filter articles that they think fit the guidelines, however I'd bet that most people don't leave the front page when considering what to vote on. This self-imposed filter bubble of convenience seems to create two separate areas of content, with more pop-culture-leaning tech news on the front page with tons of comments and votes, and a graveyard of dead/questionable/interesting/technical articles without any feedback on new.

I would experiment with adjusting a user's per-article voting power, either silently or with feedback in the form of a voting power average similar to karma average -- however, I'd adjust it based on where the stories are and/or how much feedback they've already received when a user votes on them. I'd discourage voting on already super-popular stories and encourage voting on stories that haven't gotten much exposure (from "new"). You're also forced in the latter case to evaluate an article on its merit, before it has any comments and few points, similar to how you evaluate comments now without seeing comment karma, which seems to have helped with comment quality.

TLDR: Front-page stories with 2k votes don't need another 1k - we know it's a good article. Those 1k votes would be better served picking out gems and interesting "smaller" news from /new and other areas. Disincentivizing voting only on popular stories and incentivizing voting on new/unfiltered new ones would better serve the community than trying to catch people with honeypots.


> Front-page stories with 2k votes don't need another 1k - we know it's a good article.

Once an article has hit 100+ votes, displaying more does seem kinda pointless.


Seeing a bad submission might encourage others to submit those kinds of links, thus defeating the purpose and ending up in a worse situation. Additionally, submissions that are upvoted are kept in a "Saved Stories" section of one's profile. I generally use this as a bookmark for things I want to read [again] later. What if the title is potentially interesting yet the contents are trash? I might save it thinking to read it later, and it would be counted against me. Further, there is no mechanism to remove a saved story, which would be a worthwhile thing to implement.

A better approach might be to hand-select N individuals you know are solid community members and make their votes count more. Three points per upvoted story rather than one, for example. To spread this further, take the top N users each of them upvotes the most (beyond a minimum threshold, decaying over time, etc.) and give them two points per upvoted story. With N=20, that gives you a pool of 420 people who can influence the front page to a greater degree than most while still keeping it manageable. You shouldn't need to update your original pool very often and the secondary pool can be recalculated once per day.

You could also have a bad pool consisting of those users who were flagged more than once by anyone in the original or secondary pools (again, decaying over time). Their votes could count for nothing.

This approach would anchor the community around known, trusted members and let their actions become the drivers for the behavior you wish to encourage. If you wish you had more members like those you hand-pick to upvote stories, what you're really saying is you wish those stories were upvoted more, so giving them more votes achieves that goal. The secondary pool is then reputation-based as is the bad pool.
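
A sketch of just the weighting step (pool construction omitted; the point values are the ones proposed above):

    def vote_value(user, primary, secondary, bad):
        # Hand-picked members: 3 points per upvote. Their most-upvoted users: 2.
        # Users flagged by either pool: 0. Everyone else: 1.
        # With N=20: 20 primary + 20*20 secondary = 420 weighted voters.
        if user in bad:
            return 0
        if user in primary:
            return 3
        if user in secondary:
            return 2
        return 1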


I'm actually quite reluctant to flag unless an article is clearly spammy. My feeling is that if the article is getting a little traction and stimulating discussion, the normal voting procedure should handle it. If I upvote a couple of honeypots because maybe I do happen to find them interesting, and perhaps for a reason the submitter didn't intend, then I'll be penalized.

I think there is room for honest disagreement. For example, yesterday someone killed an article that I was preparing a comment for when it had reached a karma of nearly 50 and there was a discussion taking place. It was somewhat political (and therefore dangerous), but I find the OWS interesting because it seems to be an authentic movement that's starting to self-organize.

Some 'foreign-born' (non-US) entrepreneurs I know are watching it kind of closely. So, again, honest disagreement. But if I want to keep my h(u) kosher, I'll have to flag more than I want just to make sure.


I don't like the analogy with "honeypot".

A "honeypot" exists to be a target for abusive behavior. Somebody who attacks a honeypot is making an attack -- they're guilty.

Somebody who votes for a bad story once in a while is just somebody who votes for a bad story once in a while. They're not a criminal the way a person who attacks a honeypot is. They shouldn't be treated like a criminal.

The big problems I see are: (1) different versions of the same story show up multiple times [extreme case: when the front page was about nothing but Steve Jobs] and (2) certain people write consistently mediocre blog articles that seem to be voted up by voting rings every day.

Other than that, the quality of hacker news is really pretty good.


> Somebody who attacks a honeypot is making an attack -- they're guilty.

I think the underlying assumption of the article is that people who are upvoting offtopic stuff are in fact attacking the site. But is offtopic content really the biggest issue? Clustering (or lack thereof) seems to me the predominant problem, as you said. For example, right now, we have the Bill Gates thing on the front page twice. And come to think of it, the Android SDK update shouldn't be there at all.

Maybe there should just be an option to merge discussions. Just a simple function where users can vote to merge an article with an older one on the same subject.


Disclaimer - I'm a relatively new user.

Disclaimer aside, I think it'd be interesting to try - perhaps as an experiment, similar to the /classic you posted.

As for how to implement? Well, I'd imagine that the actual programming is relatively easy (disclaimer #2 - I'm no expert on Arc, so I could well be wrong), so I'm assuming you mean how to handle it. As I see it, you have two options. Firstly, submit link-bait and other detrimental submissions yourself, with a dummy account. Secondly would be to allow the admins/mods to actually mark something as detrimental, and retroactively modify each user that upvoted it.


I think it's a pretty good idea. It'd be easy to come up with articles that are very clearly about politics/current events or otherwise quite off topic, yet 'hot topics'. Let's see: Occupy Wall Street, Republican primaries, Drug Legalization would all be ideal targets.

If nothing else, it'd be very easy to run as an experiment, to see what there is to learn about how articles like those get voted up and / or flagged, and by whom, and whether the honeypot data could actually be put to good use.


Have you considered giving extra weight to submissions by first-year users, rather than just their votes?

If someone from that subset takes the time to find and submit an article to HN, I suspect that carries more signal than a simple upvote, whose effort-cost is near zero.

You might include high-karma users as well, if the first-year members aren't submitting enough.


It completely misses the mark in my view: The problem is an overall decline in quality (i.e. mediocre articles). The presented solution fights spam. Two very different things.

Systems like this get gamed pretty heavily too. In this case a spammer could simply set up a bot that downvotes everything with <0.

I'd much prefer a system that subtly boosts the voting power of certain users. Preferably based on something that is difficult to fake. e.g. Seniority + Karma. Assuming that it is indeed true that the old members perform better than the new ones that should improve quality. This would probably create a HN elite...but if that is what it takes to improve the quality of articles then so be it.

Giving the Guidelines a bit more prominence would help too, e.g. linking to them on the submit page. If new members are anything like me then they simply do not look at the bar at the bottom of the page & so never see the guidelines.


>> If the h-ratio of a user is less than an admin-specified threshold, we flag the user as detrimental to the overall quality of the site...

This would give me pause. I read HN more for the comments than for the articles. The comments are frequently of higher quality than the articles. I usually upvote early from the new page so that the topic will get wider discussion. That means that, not infrequently, I will upvote a mediocre article or one I disagree with to get the discussion going. If necessary I'll throw in a comment saying why I think the article is off-base.

I would hate to be ranked as doing a disservice for what I am attempting to do. Perhaps I need to flag more Arabic and self-promotion spam to build up my meta-karma, although I think the current flaggers are doing a good job.


My occasional check of the 'new' page shows that most stories fall off the end of the page and are never seen again. Seems overcomplicated to rig a page no one pays much attention to (I could be wrong, but my anecdotal experience over time backs this up).

Wouldn't a simpler metric be how many times a specific user's story posts are flagged? Combine this with a karma threshold for submitting (much like the down-vote threshold) and it seems to me to be a viable option.


I don't like this idea at all. It seems sneaky in that mean, vindictive kind of way. I think my incentive to vote at all disappears if I'm continually worried about the possibility of voting on a honeypot. To me this seems like a way of punishing potentially everyone for something a certain set of users is doing. Also the thought that what I'm voting on is being tracked is mildly disturbing.


I would rather have a system where karma acts as some sort of multiplier for submissions and comments, i.e. a vote for a comment or submission from someone with high karma counts as f(karma) times as many votes. This preserves the "legacy" of HN by weighting submissions and comments from established, validated users higher than newer ones.
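
Purely as an illustration, one possible f -- logarithmic, so seniority helps but can't dominate; the constants are arbitrary:

    import math

    def f(karma):
        # 1x at 0 karma, 2x at 900, 3x at 9900 -- an arbitrary placeholder curve.
        return 1.0 + math.log10(1 + max(0, karma) / 100)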


But that favors longevity and undermines the "meritocracy" that many geeks seem to cherish. As a newer voice here, I think it odd that my contributions over six months or a year or whatever might be valued less (by orders of magnitude) simply because I wasn't user #5.

Based on 120-day rolling karma? Might work.


PG, what about coding up the thing, marking a few articles in the archive as "honeypot articles", then rerendering the site (locally) to see how it performs?


>Does anyone have any opinions about whether this is a good idea

The problem is that flagging is used as a downvote mechanism. For example, I've observed that some anti-Apple or anti-Android articles (even if otherwise legitimate) are flagged so that they drop lower on the page (you see articles with fewer points submitted later ranked on top, which is evidence of flagging). This kind of (mis?)usage of flagging could be detrimental to this method.


I think you're right on the money when you mention submission of low-quality articles. These articles end up drowning out the good stuff on the new page, especially with all the linkbait headlines. The honeypot approach would help, but the drowning out of good material on the new page would not be solved with honeypots alone.


I haven't done a particularly rigorous analysis, but for a while I was collecting some statistics on the account age of the submitters of front-page posts, and most of those were also submitted by HN veterans.


How do you measure "comment quality"? Average comment karma per user? Average votes per comment? Some ratio of upvotes to downvotes?

If it's any of these I see a problem similar to what I think has happened to Stackoverflow. I've answered a lot of questions on SO (although virtually none in the last year) and started doing so a few months after it launched. In that time I noticed two things:

1. The low-hanging fruit got answered (generally speaking most of the highest voted questions and answers are early); and, this is the important one

2. The volume of questions and answers was higher. One impact of this (IMHO) is that people see each question for a shorter period before it drops off the front page. It's that period on the front page where the bulk of votes come from (although there is a significant long tail, particularly on questions that surface on the front page of Google a lot).

HN is of course a news site so the "low hanging fruit" doesn't equally apply but there is some relationship. Each article is, to a varying degree, part of a larger debate whether it be about free software, user rights, the future of the computer/phone/tablet/whatever, etc.

In the early days of HN such debates probably had livelier discussion due to previously unsatisfied demand. Now though I think there is a lot of rehashing of the same point. Early users probably have reduced interest in this.

To (2), if there are more active users commenting, then it stands to reason that each user sees fewer of the comments overall. Depending on how this scales, you may in fact have fewer people overall reading and voting on a particular comment.


Not by any number; just by how thoughtful the comments seem.


As someone who joined a while back at a friend's suggestion, and only recently started getting involved (the past year?) I feel compelled to say "My bad," to everyone.

I'm finding my footing, I feel, but yeah, I've put up some stinkers. It was mostly out of exuberance to see what a community I consider high-class thinks about what I have to say. I don't think I had a good understanding of the place when I first started frequenting it. That comes with time.

So, I'm not excusing shenanigans, just saying it's often with the best of intent. Compound that by increased readership...


I'm not sure /classic is a reliable way to check for home page quality: the number of submissions per unit of time is much higher now, and the "rising" stories, the ones you see flashing on the home page with their first few votes, are now affected by the new user base.

In order to be reliable the subset of users at /classic should also see only new submissions by classic users, and should also see the home page itself as composed only of classic news and votes, otherwise they are affected by all the rest.


Perhaps the 'high quality' first year users left when they could no longer stomach the decline and only the 'low quality' first year users remain.


The similarity is quite interesting - some articles don't even move position. Do you have one that shows only green users?


The thing is, upvotes are a function of what shows up on the front page. So it makes sense.


There might be some selection bias: users who still like the contents of the current frontpage are more likely to still be around than users who, as the frontpage shifted away from their taste, visited less and less often.


We're assuming people are using the vote buttons to vote, and not to save articles for later. What if a percentage of people say "that looks good, that looks good" and vote for stories so they can review them in their 'saved stories' at a later date?

What if HN would not allow people to vote on things unless they actually clicked on the link?


I think this is a good suggestion. Even worse than people using the vote button to save stories for later: I've found myself occasionally voting on articles because I assume I'd like the article, without reading it.

I scold myself whenever I catch myself doing it. But I bet others do the same thing.


I do the exact opposite. I only upvote an article if I've read it and know that I will almost certainly want to read it again later. I'm not sure if this is better or worse practice...


I'm guilty of voting to save for later. But not just the article, the discussion of it as well.


The quality is relative. As HN has gotten more popular, the level of experience of the average user has gone down, and those users now dictate more of what "quality" is. In the past you had a higher percentage of core "hackers", and they were posting things that they found to be of quality, but now you have much more varied submitters and voters. More people equals a lower lowest common denominator.

To fix this you need segregation: an ultra code/tech area, a business/startup area, and a fanboy/fluff area. You basically want to give a high signal-to-noise ratio to the different groups of people. For example, I would just visit the code/tech area and not have to deal with all the noise of the other sections.


Am I missing something with the second formula?

h = (f - v)/(s * t)

The perfect flagger would have v = 0 and f = s. His total flag score is t = s + x, where x is the number of non-honeypots flagged. His score would be:

h = s/(s * s + s * x) = 1/(s + x)

As s => infinity, h => 0. This score would actually punish a good flagger over time, no? A perfect veteran flagger with 100 for 100 honeypots flagged would have a lower honeypot ratio than a perfect newbie flagger who's 5 for 5 honeypots flagged.
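
A quick numeric check, assuming x = 0 non-honeypot flags for both users (so t = s):

    def h(f, v, s, t):
        return (f - v) / (s * t)

    print(h(f=5, v=0, s=5, t=5))        # newbie:  0.2
    print(h(f=100, v=0, s=100, t=100))  # veteran: 0.01 -- punished for tenure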


Correct you are! I've updated the article with a fixed version of this formula.

This version should in fact rank the 5 for 5 guy slightly lower than the 100 for 100 guy. It's still not perfect but I believe it does its job in theory.

Thank you for pointing that out. :)


You're welcome!

The current version looks better. I would add that the flag adjustment should also factor in honeypots seen. This version will punish flaggers (good or bad) with a large flagging history who've seen no honeypots.

-(1 - f/(t+1))

goes to zero as t goes to infinity--desirable when someone has seen and not flagged honeypots, but not as desirable when someone hasn't (which would be any user with many flags at the time of algorithm implementation).

I went back and read your footnotes. Footnote [1] is indeed a linkbait-y article. To me, it demonstrates a behavior described in another comment here: upvote as a save function. The title looks interesting, and in the middle of a work day, one may not have time for a long article. There's even more incentive to use it as a save as the front page volume cycles more.

Personally, I think that this 'noise' in upvote value can be mitigated by adding a separate save function and perhaps even eliminating an upvote history visible to the user (migrating current upvote history over to save history first so users can still access their clippings).


Let's so not do this. The worst possible way to educate people is by showing them all the things they shouldn't do. It's painful and slow compared to cutting to the chase and showing them what to do. I see no good coming of this. None whatsoever.


The whole point is you show them nothing with honeypots. They simply get their vote penalized in the background because they didn't follow the guidelines of the site.

When a site is growing, there is no way to handle the constant influx of new users. The result is a dilution of quality on the front page, at least as measured by the guidelines of the site.


When a site is growing, there is no way to handle the constant influx of new users.

Just because we don't yet have an established means to effectively do so does not mean it cannot be done. I would rather work on solving this question and other important questions concerning managing culture. Manipulative tactics like this have very serious limits and substantial downsides. I used to give a lot of parenting advice and I can't tell you how many parents have essentially asked me "How do I manipulate my child into being less manipulative?" And the answer is you can't. They learned that crap from you. Don't like it? Then stop doing it.

The same applies to online forums. "Do as I say, not as I do" fails as a moderating tactic just as badly as it does as a parenting tactic, only worse in some ways because it's magnified thousands of times (ie by the number of members emulating what the leadership does) rather than a handful of times (ie however many children you have at home doing the same stupid stuff the parents are doing).


I am not sure if we disagree or if you are fundamentally misunderstanding the proposed solution.

There is nothing preventing using both methods simultaneously. Your approach is to teach people how to act. My approach is to punish people who break the rules. They are not mutually exclusive techniques.


A) Unless I completely misunderstood your article, you are suggesting the admins put out links to things they do not approve of to see who screws up and upvotes them. In other words, you are suggesting that admins basically model what not to do by doing it themselves. This is precisely one of my points: Don't do yourself what you don't want others to do. Lead by example.

B) There is a time and place for "punishment" but it should be a last resort, not a first line of defense. It fosters an uncivilized environment and is therefore counterproductive to solving the issues people here most strongly express concerns about.

So I can't say I agree with your assertion that they are not mutually exclusive techniques. They mostly are in my experience.


> Unless I completely misunderstood your article, you are suggesting the admins put out links to things they do not approve of to see who screws up and upvotes them.

Great! We now know that you did not completely understand my article. :)

The whole point of implicit honeypots is to leverage the fact that articles are already making it to the front page that violate guidelines (e.g., politics, religion, etc). The admins can then flag these articles, so as to not have to spam their own site.


I have to say, I tend to partially agree with both of you: admins shouldn't be submitting articles that won't add to the discussion, so I would drop the first part of your proposal.

For submissions that have already made it into the site and are detected to be honeypots, those votes and flags would be used to punish users.


As a new user, I can say that what attracted me to the site is its quality and the number of links that lead me to the edges of my understanding of a given topic... I feel pushed to learn more.

At the same time, I would like to contribute and also feel compelled to express my opinion at times, but don't really feel free to do so unless it is an area where I feel pretty confident that I either know what I am talking about or have something to say that wasn't already said. I've noticed that just saying "wow that's cool" is frowned upon. I've also seen several threads where comments are downvoted to invisibility and I can't figure out why. Sometimes later they are upvoted again, sometimes not. But I feel like I am learning what is and is not acceptable... and as I increase my knowledge on topics that I came here to learn about, I hope to have more insights to offer back to the community (right now I can't say that I do).

I guess my points here are:

A) It's already a great site with quality much higher than a lot of other message boards.

B) It's hard enough to figure out what is ok and not ok to comment on.

C) To keep the community vibrant, presumably there should be some tolerance for and encouragement of growth in posters' ability to contribute.

This talk of "punishing" is discouraging. I suppose if there are already enough people here to understand what the community is supposed to be about, and if that group is self-sustaining, then there is no need to worry about attracting new users and exclusionary tactics are not a problem.

Quality is what you (or we) make of it. I've read thought-provoking comments on topics that are probably a bit off the reservation... and seen interesting segues inside threads that take me places I wouldn't expect.

Another approach might be to seed the front page with articles that are good examples of what the community is striving to focus on. Maybe put a green sprout next to it or something. Add one more voting mechanism for people at whatever karma threshold: a vote for "exemplary" status. I suspect that not every regular upvote would translate into an "exemplary" upvote... the front page would reflect the interests of the community, and if it was bare of exemplary articles, I have no doubt users would soon vote some quality articles onto it. My own preference in dealing with people is to give them an easily accessible mechanism to exceed your expectations instead of finding ways to punish them for not.


This talk of "punishing" is discouraging.

The chilling effect very seriously concerns me. Assumptions of guilt do enormous harm to trust and undermine genuine civility. People need to feel it's reasonably safe to open their mouths; they need to feel they don't have to walk on eggshells or be perfect, that there is some room for being human, making mistakes, and so on. Robust discussion cannot thrive without some tolerance for friction. Finding ways to lubricate the process is good. This proposed approach is not lubrication.


I guess the message that this sort of moderation (whether done by human or by algorithm) sends is "don't come here unless you already fully understand and appreciate the ethos and mission of this site, and don't post unless your contribution is going to be something of the highest quality possible, according to the standards of the site".

Which is fine as far as it goes, but basically when you boil that down it's "don't screw up, or else."

That isn't what attracted me here. What attracted me here was reading interesting links and thought-provoking discussion, and thinking "man, I need to up my game so I can participate meaningfully".

If the goal is to have a members-only kind of retreat from the mundane, then I suppose the notion of creating an underclass of posters who don't even know they are being ignored makes sense. But in that case, why not take it a step further and just require applications and screen out members in the first place?

If the goal is to grow the site and generate more traffic, then I would submit that encouraging people to emulate quality contributors is a better approach... why not flip this algorithm on its head? Instead of hell-banning those who score poorly, add a karma boost for those who score well... and an indicator on articles that meet the site's criteria for quality.

People don't like to do as they're told, but they sure like to do what got somebody else a gold star.


What attracted me here was reading interesting links and thought-provoking discussion, and thinking "man, I need to up my game so I can participate meaningfully".

I participate to up my game. This approach tends to kill that possibility (or at least contributes to slowly killing it).

If the goal is to grow the site and generate more traffic,

As I understand it, the actual business goal of the site is to help YC screen applicants: your username is a required part of your application to YC, and (if no one else) pg will go check your comments. Since startup founders tend to be young, and therefore probably a bit socially wet behind the ears, it seems to me that being too controlling about the site in that regard is potentially a bad business decision.


The two can be performed simultaneously, but it is a fairly common belief in psychology (mostly from Skinner) that positive reinforcement is better than punishment when it comes to altering behavior.


Sure. I totally get that and think it's a great area for future exploration.

A couple of points though:

1) You are not technically giving any reinforcement, because the agent in question (i.e., the user) does not perceive any change in the environment.

2) The pageviews that HN can drive may offer sufficiently positive reinforcement for people to continue violating the site guidelines by creating link-bait articles.

3) Honeypots are merely a fail-safe to prevent degradation. In a healthy community, one would expect very few people to ever have an h-ratio low enough to be ignored.


We could go down the fully complicated path:

1) Public voting: reveal who voted for what on articles. Users who want their votes public could mark themselves so (hopefully as an opt-in).

2) Blacklisting voters: let people mark public voters as bad, as a form of blacklist. May lead to "haunters" who post but whom no one can see.

3) Whitelisting voters: votes from whitelisted users count more, or count exclusively. May lead to "power voters" seeking votes, but that happens already ("Vote and add to the HN discussion here").

People seem to crave certain votes over others. I have no idea how this would apply to comments; it's a mixed bag.
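
To make (1)-(3) concrete, here's a rough sketch of how the three mechanisms might compose when scoring an article for one viewer. Everything here (the names, the 2x whitelist weight) is an invented assumption, not a proposal for exact values:

    // Hypothetical composition of public votes, blacklists, and whitelists.
    interface Vote { voter: string; isPublic: boolean; }

    function scoreFor(
      viewer: { blacklist: Set<string>; whitelist: Set<string> },
      votes: Vote[],
    ): number {
      let score = 0;
      for (const v of votes) {
        // (2) Blacklists can only apply to opted-in public voters;
        // hidden voters can't be identified, so they count normally.
        if (v.isPublic && viewer.blacklist.has(v.voter)) continue;
        // (3) Whitelisted voters count extra; everyone else counts once.
        score += v.isPublic && viewer.whitelist.has(v.voter) ? 2 : 1;
      }
      return score;
    }

Note the asymmetry: both lists are only as strong as the opt-in rate for public voting.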


Even more complicated would be collaborative filtering (http://en.wikipedia.org/wiki/Collaborative_filtering): increasing the weight of votes from users who vote the way I do.
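
As a minimal sketch of what that might look like (the data shapes and the simple agreement measure are assumptions for illustration, not anything HN implements):

    // Vote vectors: item id -> +1 (upvote) or -1 (flag/downvote).
    type Votes = Map<string, number>;

    // Average agreement between two users over items they both voted on.
    function similarity(a: Votes, b: Votes): number {
      let dot = 0, overlap = 0;
      for (const [item, va] of a) {
        const vb = b.get(item);
        if (vb !== undefined) { dot += va * vb; overlap++; }
      }
      return overlap === 0 ? 0 : dot / overlap; // in [-1, 1]
    }

    // An item's personalized score: each voter's vote scaled by how much
    // they historically agree with me; no overlap contributes nothing.
    function personalizedScore(me: Votes, voters: Votes[], item: string): number {
      return voters.reduce((s, v) => s + (v.get(item) ?? 0) * similarity(me, v), 0);
    }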


The main feature I'd like is a killfile. USENET had this; too few websites do.

I'd like a way to 1) killfile comments by particular users and especially 2) killfile articles by keyword or submitter.


That's easy enough to do with a Greasemonkey script or other extension. Are you interested enough to write it?
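
As a starting point for (1), a userscript-style sketch. The selectors (tr.comtr for comment rows, a.hnuser for author links) are guesses at HN's markup and may need adjusting; (2) would work the same way against the story list:

    // Hide comments written by anyone in the killfile.
    const KILLFILE = new Set(["someuser", "otheruser"]); // hypothetical names

    for (const row of Array.from(document.querySelectorAll<HTMLElement>("tr.comtr"))) {
      const author = row.querySelector("a.hnuser")?.textContent ?? "";
      if (KILLFILE.has(author)) row.style.display = "none";
    }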


There's already enough junk on the /newest page making it difficult for legitimate articles to gain traction. Adding fake articles to it is just going to make the problem worse.

I would rather see flagged articles removed from /newest more quickly, and have some mechanism for letting articles with at least one other upvote, or those submitted by users with enough karma, linger on /newest longer than they otherwise would.


This is one of the biggest things that needs to get addressed, I think. Looking in the queue, there is a lot of stuff that is obvious spam (a user who just joined, trying to promote a useless conference or something).

It makes finding the goodies tougher, and it makes especially link-baity things more likely to hit the front page (because they stand out, and can get the 2-3 votes needed to actually stand a chance of being seen by anybody before they drop below the fold).

Not to whine, but I recently wrote an article that I thought a lot of people here would enjoy. Most hackers I showed it to loved it, so I figured the people here would too.

The people who saw it did. They tweeted about it, liked it on Facebook, etc., but it never saw the front page, meaning it never got very many eyeballs on it.

Now... I can see this because when I write an article, I can watch the traffic. I see the same trends (without being able to look behind the scenes and see how much traffic is actually being sent) on anything else I submit.

Look at stuff like this: http://news.ycombinator.com/item?id=3109235

This is the type of thing that got me originally addicted to this place. Maybe I'm just getting smarter, but lately the articles here seem a lot less hacker-oriented and a lot more business-gossip.


> adding fake articles to it is just going to make the problem worse

Hence the implicit honeypot extension that I proposed. :)


Your honeypot concept appears to target sockpuppets or voting rings that try to vote up bad content that gets flagged by legitimate users. That's not the same as legitimate articles that don't get enough upvotes because they roll off of /newest too fast.

Even with zero spam, the site is now big enough that legitimate articles get submitted at a rate that makes the /newest page move too fast for things to gain traction.


If the /newest page is moving too fast, how about aging articles not based on time, but on the number of times they have been clicked by users?
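
For example, something like the following, where the exponent is an invented tuning knob rather than anything HN actually uses:

    // Decay a /newest entry by how many times it has been seen, not by
    // hours elapsed, so unseen submissions keep their full rank.
    function newestRank(points: number, views: number): number {
      return points / Math.pow(views + 1, 1.5);
    }

    newestRank(1, 0);  // 1.0: unseen submissions don't age at all
    newestRank(1, 50); // ~0.003: well-seen but un-upvoted submissions sink

That way a slow-moving /newest page doesn't bury articles nobody has actually looked at yet.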


Since the honeypots are not actually being prioritized, I believe this could be addressed by only testing new or controversial users; i.e., a user with a good h-value should rarely or never see honeypots on their front page.


It seems this method works on the same sort of psychology as when the Russian KGB would test citizens' loyalty to the state by having agents make anti-government statements and then seeing who reported them.


Another missing feature is automated grouping of similar posts. Having the same (tech gossip) news five times across the first two pages is annoying, even sad, considering the state of the art of computer science these days. HN could definitely be a better flagship for hackerness, IMO...


> Having five times the same (tech gossip) news on the first two pages is annoying

Especially when they're all blogs with sloppy / link-bait / contrarian reporting of the same source article, which is usually more interesting.


Although not exactly the same, honeypots are similar to meta-moderation on Slashdot. I am not sure if they still have it, or if it played any part in slowing a decline, but Slashdot is very ordinary these days.

Personally, I would love to see more startup-related articles. I don't care about the stuff I might pick up elsewhere, such as TechCrunch articles, Gruber's opinions, how A sells more than B, or politics.

IMO, the front page can do without:

1. Google denies requests to hand over data

2. Samsung overtakes Apple. Last week, Apple overtook someone else.

3. Gates to students, "....."

4. Righthaven, copyrights, and piracy

5. Forrester's thoughts about supporting Macs in IT

6. Stallman vs. Steve Jobs

7. Ripples visualization


Maybe the article itself is a honeypot.


It seems to me that instead of worrying about fancy ways to enforce the guidelines, perhaps the guidelines should be rewritten to be more explicit about what is and isn't on-topic.

This:

Most stories about politics, or crime, or sports, unless they're evidence of some interesting new phenomenon... If they'd cover it on TV news, it's probably off-topic.

is pretty vague.

The worst threads are the ones straddling the line between "politics" and "economics", where a lot of people with bees in their bonnet get a chance to wheel out their favourite hobby horses (with apologies for mixing equine and apiaristic metaphors). These are the stories I'd like to see squashed, somehow.


This is deviously awful, yet highly effective. It makes for a desperate underclass of the automatically ignored, yet never lets them know that they are members of said class.

Better yet, it accuses them of deserving it.

Ten Machiavelli points to you, sir.


Thanks? :)

But this is not serfdom. You have mobility in the case of implicit honeypots: if you follow the guidelines well, you'll float to the top and become a super-flagger. And even better, if you stop consistently upvoting crap, you will rise from the ignored to the heard again. :)


Ah, but only if the honeypots are chosen to match the explicit and stated guidelines -- which are currently often ignored. Should that drift, the tone of the site will change slowly, invisibly and inexorably, and the old guard will be automatically shifted out with a silent coup de grace.

The honeypots become a way for moderators to upvote or downvote the whole tone of HN and do so without telling any of the users.

I look forward to the bot- and crowd-based tools that will evolve to watch the front page of the site and try to guess which articles increase or decrease your HN influence. It's a mathematically interesting problem.


You probably meant "HNfluence."


You could try and trap with a honeypot.

Or you could educate and convert.

It may be that the decrease in comment quality is at least partly due to increased exposure of the wrong sort of entrepreneurial hacker motive ("take VC, do whatever it takes, exit, be financially independent") as opposed to the right sort ("serve the community honorably at a profit"), in the spirit of Packard, Hewlett, Bezos, Edison, Ford, and Watson.

The arc of the startup has become more about 15 minutes of fame, and less about hundreds of years of employing thousands of people. More about not offending and not doing evil, and less about asserting truth and doing good. Culture has become more about free lunches and less about doing hard things and standing in the gap when it hurts. Some have forgotten what humility means, that "we are all grains of sand", that we exist "for others" not "for ourselves", to serve and not to take. And some no longer believe this is even possible.

If HN begins to reward the others-centered motive and rebuke the self-centered motive, the ground will be prepared for the true startup spirit to again take root and flourish. If we can educate the next generation of hackers and get the motivation right, the methods will follow, and there will be less and less need for honeypots.

To do this, there needs to be a Hacker Credo, and it needs to be at least as radical as the Johnson and Johnson credo, and as definitive and steadfast as Henry Ford's magnum opus "My Life and Work".


>You could try and trap with a honeypot. / Or you could educate and convert. //

So, something like: when I downvote or flag, an info box pops up saying "this post was upvoted by 94% of top-ranked users; are you sure?"?


If the h-ratio of a user is greater than an admin-specified threshold, we flag the user as detrimental to the overall quality of the site and their upvotes would either be discounted or ignored entirely.

Nit-picking here, but I suppose tansey meant "if the h-ratio is smaller" rather than "greater", since you'd want to ignore upvotes from those who upvote honeypots too much, rather than from those who flag them too much.


Yes, thank you! I originally had the formulas differently and forgot to update the text to reflect the flip. :)


What is link-bait? It seems open to interpretation. Some topics are highly controversial and bring out passionate views, but should they be banned because of that?

Honestly, I much prefer the pure coding articles, or stories about code and coders, to the "how I launched in 36 hours and got one million users" articles. In my opinion, the latter are link-bait and decrease the value of HN.


I don't expect this comment to get read, it's likely buried under the 100+ already here, but here is my two cents on the subject...

Chasing spammers with ever-greater automated systems inevitably starts catching real people in the net. People who don't know they are in the net, and who otherwise contributed to the community, get enraged when their contributions are obviously being ignored.

Over time, the pool of people who get through the ever-growing net of automated spam blocking becomes smaller and smaller, eventually turning the site into an effort driven by a small group of users so highly rated that the spam algorithm simply doesn't look at them anymore. In Digg v3 parlance, "super users".

Digg had one of the most advanced anti-spam algorithms in social news for v3, and they STILL couldn't control it as the site became dominated by a few select people who had escaped the initial watchful eye of the spam algorithms.

Once their "rep" was high enough, they became impervious to getting knocked down by it.

Unfortunately for all the new users, there was no hope unless they played EXACTLY by the rules of this nebulous anti-spam algorithm, and no one could tell whether it was doing a good job or not... unless you had people manually reviewing spam submissions all day long, which is impossible at this volume.

The net-net of these honeypot and other highly advanced ideas is that you catch a lot of decent people in the net, and they have no way of getting out.

That is a lot of time spent fighting a battle that isn't really the right focal point.

The reality is that as this site's popularity grows, submissions and comments are going to get more normalized. That is the nature of folding more and more people into the mix.

That isn't spam, that is human nature.

Create a group (of any kind, like one for organizing birthday parties) of 3 people and see how it performs and behaves. Now add 40 people to it... it will be significantly less efficient and more "spammy", with stupid email forwards and questions about whether international desserts are "appropriate".

This isn't spam, this is just the nature of a much larger group.

If you deploy a spam algorithm and start muting half those people, you might knock out some of the distracting emails (at least the ones the person writing the spam filter deems distracting), but you also piss off half the group, who then go elsewhere to contribute.

Digg v4 took this to an extreme, and we saw what happened to their community. Reddit still plays by its original rules even though it dominates the social news sector in traffic, and it manages just fine.

If HN were crushed by pharma submissions and link bait, I'd say we had a problem, but traffic seems to continue to grow and I haven't seen any obvious degradation in the last year.

I am sure the HN of today is much different from the HN of 3 years ago, but that doesn't necessarily mean worse. If the people complaining about HN's quality really mean they just want a different type of elite site that isn't open to all this riff-raff (I consider myself riff-raff), that is a much different problem from spam-blocking.

This idea that every submission should be amazing and every comment will make you cry because of its intelligence is not realistic.

The site is fine.


If this proposal were implemented, I might only upvote comments and articles that I think the top HN users would like, not necessarily the ones I like, turning HN into a cliché of HN. "How to bootstrap your minimum viable product using Node.js". "Scala, Clojure, or Erlang?". "LISP for Bayesian A/B testing."

I'd much prefer a system which correlated my votes with other users' and preferentially showed me articles and comments that matched my own tastes. Sure, if I only upvote to match my own biases, I'll get more biased articles. But if I also upvote good but contrarian opinions (and I would), I'll also get more good and contrarian opinions. Best of all, this encourages non-strategic voting--so, later on, if you find a good use for someone's voting record, you can trust the veracity of that record.


Things tend to look better the further you move them into the past. Nostalgia. Things just aren't anymore what they never used to be.

HN was and is the same. IMHO the best place of its kind. While the article shows an interesting formula, I don't think there is a need for it on HN.


The idea is neat. Concerning the terminology, it might be more appropriate to call it a honeytoken ( http://en.wikipedia.org/wiki/Honeytoken ) than a honeypot.


No one will upvote anything anymore. They'll move their cursors to the up arrow and pause for a moment.

What if this is the one? What if this link is the buried landmine that will explode and destroy my perfect Hacker News karma score? I can see the headlines now: "Respected Hacker News User Clicks on Obvious Flamebait". Think of the scandal.

And then they'll move their mouse cursor away, pining for a HN where they can express their opinions about articles without worrying about what the group will think.


HN needs to grow a social graph in the background. Users often fall within the same threads and discussions without even noticing it. The graph could then be used to personalize results, on a group or individual basis. This is a call for a fragmented view, but with a social touch, preserving the herds around multiple topics. A button could let you opt out of the personalized view.


Reddit attempted this but gave up. From what I can remember, the personalized stories were not noticeably better than random.

For instance, here on news.yc, I posted in an iPhone thread today, but I don't actually want iPhone news highlighted. I posted in a Steve Jobs thread, but I certainly did not enjoy the cacophony of stories that flooded the main page following his death.


Scrobbling applied to upvote weighting? Weight other people's upvotes according to how well they align with your own. If you upvote pictures of cats, that's fine; it's just unlikely to align with the front pages of people who don't want pet photos.


This discriminates against users who cannot flag and users who do not flag. Since the only way to improve one's h-value is to flag more honeypots, it basically means that someone who can't or doesn't flag will have, at best, an h-value of -1. So would the h-value simply not be counted for people with few or no flags?
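
If the h-ratio takes a form like (flags - upvotes) / (flags + upvotes) over a user's honeypot encounters (an assumption on my part, though it is consistent with the -1 floor above), the natural answer is to leave the no-interaction case unscored:

    // Assumed form of the h-ratio; the article's exact formula may differ.
    function hRatio(flags: number, upvotes: number): number | null {
      const total = flags + upvotes;
      return total === 0 ? null : (flags - upvotes) / total;
    }

    hRatio(5, 0); // +1: flags every honeypot encountered
    hRatio(0, 5); // -1: never flags, only upvotes honeypots (the floor above)
    hRatio(0, 0); // null: no honeypot interactions, so the user goes unscored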


I don't think you need explicit honeypots. Just give a select few users special up/down votes that mean "this should never be flagged" and "this should never be upvoted". Then use this labeling of the data to compute a metric for vote quality. (And I'd recommend these moderation-type votes be retractable.)
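
A hypothetical sketch of that vote-quality metric (the label names and data shapes are invented): the share of a user's votes on labeled items that agree with the moderators' labels.

    type Label = "never-upvote" | "never-flag";

    function voteQuality(
      votes: { item: string; kind: "up" | "flag" }[],
      labels: Map<string, Label>, // the retractable moderation votes
    ): number | null {
      let judged = 0, good = 0;
      for (const v of votes) {
        const label = labels.get(v.item);
        if (!label) continue; // unlabeled items tell us nothing
        judged++;
        if (label === "never-upvote" && v.kind === "flag") good++;
        if (label === "never-flag" && v.kind === "up") good++;
      }
      return judged === 0 ? null : good / judged; // null = not enough data
    }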


Interesting. I've mostly been a consumer here and rarely submit or comment, but even so I never realized there was a guidelines page. Honestly, unless I need to find a "contact us" link, I rarely look at the footer of any website. Perhaps a little more visibility of its existence would go a long way.


Honeypots would be great - they'd sort out lots of problems.

But there's still the problem of people submitting lousy articles, or submitting blogs / reports about an article instead of the original article. These aren't just new users, either; some of them are established, long-time users.

Some way of sorting those out would be useful.


It might be more effective to simply have a page showing sample good comments, so people have a reference point when deciding to comment. I'm not sure how to pick a list of good comments, but it would be best if it specifically included examples of comments that went against HN groupthink.


I think it is a really neat idea, and certainly couldn't do any harm.

It is targeted at a specific problem that DOES occur on HN: not necessarily every single day, but often enough that it would be nice to have a countermeasure.

I do think that implicit honeypots are the way to go, rather than explicit.


Can't the same problem be solved by giving additional qualified members the ability to downvote?


What about an initial barrier to entry, of the sort of peer review that journals do (i.e., review of links by the community before they are put up for voting)? We've been testing it on http://textchannels.com/


This is just supervised learning. No need to be so fancy about it.


It's not really supervised learning. I suppose one could argue that the bootstrapping phase resembles supervised learning, since you are in fact measuring how well each user discriminates between honeypots and acceptable articles. After that, however, it's more like unsupervised learning.

If I were going to use this terminology, I would say that implicit honeypots are a generative model that is bootstrapped via a discriminative learning phase.

And who is being fancy? It's not like those formulas are that confusing, are they? :)


Might it be simpler to penalize people who vote for an article that is later flagged or killed?

Maybe a temporary ding on their votes' impact.
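
Something like this sketch, where the one-week window and the 50% floor are invented parameters:

    // After upvoting an article that later gets killed, a user's vote
    // weight is halved and then recovers linearly over a week.
    const RECOVERY_MS = 7 * 24 * 60 * 60 * 1000;

    function voteWeight(lastDingAt: number | null, now: number): number {
      if (lastDingAt === null) return 1.0; // never dinged
      const elapsed = now - lastDingAt;
      return elapsed >= RECOVERY_MS ? 1.0 : 0.5 + 0.5 * (elapsed / RECOVERY_MS);
    }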


Since the decreased quality can only be perceived by a few (if everyone noticed it, there would be no problem, right?), perhaps those few could be selected by the benevolent dictator to get 50-point votes or something? IOW, moderation.


Article quality, much like beauty, is in the eye of the beholder. Therefore, quality assignment should be calculated specifically for a particular user, based on all available criteria, using a method selected by said user.

Methodologies based on group-think or admin-think will always lead to a measure of quality which is "ugly" or "bad" for someone, at some point. So, center quality around what the user wants.

In today's social networks, central management of data quality is an absurd notion left over from the early days of the Internet. It always leads to data deletion, user exclusion, or other forms of censorship. All data should remain, but should be filtered, for each user, based on what the user wants. To that end, the social network's job is to provide more selection criteria, for all users, and better methods to put that criteria to work, for each user.


Let's post some honeypot suggestions. I'll start with two potential honeypot comments:

"Fuck Republicans."

"Fuck Democrats."


Yeah, only take people who upvote both.



