True. But credit where credit is due. Very cool analysis for a throwaway blog post specifically manufactured to garner karma.
Only thing I'll add as a data critique: the negative factors are reported as things to avoid. But, in fact, all of the reported-on titles actually made it onto the Hacker News front page (1). There are an awful lot of submissions that never make it that far. In fact, the significance of the findings indicates that those terms make it onto the front page A LOT (2). I don't think the negatively correlated terms should necessarily be viewed as failures, just as less successful. My own suspicion is that those titles do draw eyeballs, but someone using titles like those is also likely to be kind of a bad writer, which prevents those stories from getting upvotes. It would be very hard to prove a correlation between quality of title and quality of writing, though.
(1) I believe. Hard to tell from the post.
(2) Otherwise there wouldn't be enough data for them to be significant.
The point of the article, as I understand it, was what influence the title has independent of the content. Stuff like quality of writing will just be noise that stops mattering as long as you have enough data.
They're not exactly in rank space: they discretize to a binary variable, whether or not an article made it into the top 20, then use logistic regression to model that. So the coefficients are in the log-odds space of that indicator.
Yeah, but instead of looking at whether something got into the top 20 (binary 1/0), he's saying that they should have modelled the absolute score of an article. This would give you a way of weeding out the cruft that hits the front page and disappears quickly.
Another way of looking at it would be the amount of time that a post spends on the front page.
We spent a little time modeling various transforms of absolute score. The top features are essentially the same, but the coefficient variance is a lot higher. We're also interested in modeling rank or mindshare "stickiness" -- some articles remain in higher spots longer than others.
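For anyone who wants to poke at that comparison, here is a minimal Python sketch of the "regress on a transform of absolute score" variant; the feature matrix, scores, and regularization strength are made-up stand-ins, not the data or pipeline from the post:

    # Sketch: L1-penalized regression on a log-transformed absolute score.
    # X is a fake bag-of-words matrix of title features; score is fake too.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(1000, 50)).astype(float)  # stand-in binary title features
    score = np.exp(rng.normal(1.0, 1.5, size=1000))         # stand-in heavy-tailed scores

    model = Lasso(alpha=0.1)             # L1 penalty, analogous to the logistic setup
    model.fit(X, np.log1p(score))        # log1p is one reasonable transform of raw score
    print(np.flatnonzero(model.coef_))   # indices of features that survive the penalty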
Very nice, but the analysis seems to assume that HN rank is determined by the headline and not by the content. (More precisely: for the analysis to give useful guidance to would-be HN headline writers, it needs not to be the case that content features correlated with headline features make a big difference to HN rank.)
My proposal for a good headline according to the numbers in this article: Showing why impossible future controversy survived the problem could hire data. Score: 1.3 (could) + 1.2 (problem) + 1.3 (survived the) + 1.0 (controversy) + 0.9 (impossible) + 0.7 (why ___ future) - 3.3 (11 words) + 2.6 (showing) + 0.5 (hire) + 1.9 (data [END]) = 8.1. For comparison, Why showing the future is essential to acquiring data gets 1.4 (essential) + 0.7 (why ___ future) - 2.7 (9 words) + 2.6 (showing) + 1.7 (acquiring) + 1.9 (data) = 5.6 -- except that it doesn't really get the points for "essential" (not at start) or "why ___ future" (two words in between) or "acquiring" (not in second place, word isn't quite right). Of course my headline has the little drawback of being total nonsense.
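For anyone who wants to play with the tally, it is just a sum of the per-feature contributions quoted above (numbers copied from that comment, not pulled from the article's actual model):

    # Toy re-creation of the headline-score arithmetic in the parent comment.
    contributions = {
        "could": 1.3, "problem": 1.2, "survived the": 1.3, "controversy": 1.0,
        "impossible": 0.9, "why ___ future": 0.7, "11 words": -3.3,
        "showing": 2.6, "hire": 0.5, "data [END]": 1.9,
    }
    print(round(sum(contributions.values()), 1))  # 8.1, matching the hand calculation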
Great -- I'm hoping L1-regularized logistic regression will become the standard first thing to try in these quick-n-dirty "predict response variable from text" experiments. That's our approach too. (I assume this is L1 or similar since you mention regularization causing feature selection.)
[[ Edit: deleted question about what 'k' is for the discretized 1{ rank <= k } response. It's mentioned in the article ]]
Yeah, pretty strong L1 -- most features were 0. We binarized rank on I_{rank <= 20}. It turns out there are tons of articles beyond the first page that stay low forever. Check out the interactive viz vad made: http://hn.metamx.com (warning: 2.6MB of compressed JS ahead).
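For concreteness, that setup is roughly the following in Python, though the stand-in data and the exact regularization strength below are guesses rather than the real pipeline:

    # Sketch: binarize rank into a top-20 indicator and fit L1-regularized
    # logistic regression, so most coefficients are shrunk to exactly zero.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.integers(0, 2, size=(1000, 50)).astype(float)  # stand-in title features
    rank = rng.integers(1, 200, size=1000)                  # stand-in best-achieved ranks

    y = (rank <= 20).astype(int)  # the I_{rank <= 20} indicator
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)  # small C = strong L1
    clf.fit(X, y)
    print((clf.coef_ == 0).mean())  # fraction of features zeroed out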
Another question: how are standard errors calculated? I assume they're not from the bootstrapping, since the p-values clearly aren't from the standard errors (+/- 1.96*se crosses coef=0 for several cases that still have small p-values). The other way I would think to get p-values would be the percentage of bootstrap replicates that have coef==0. But with only 20 replicates you're stuck with p=0 or p=0.05.
I'm genuinely curious how to do coef significance testing for L1-regularized models. I once saw someone ask this at a Tibshirani talk and he said "oh we have no idea, we've resorted to the bootstrap before".
To be honest, we just recorded the coefficient values for each replicate and did the bootstrap variance calculation.
The % of replicates with coef==0 is potentially much more clever, especially since that's the test we want to perform anyway. I'll run that over the data and see what changes.
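Both summaries fall out of the same replicate-by-feature matrix. A sketch, reusing the stand-in X and y from the snippet above (20 replicates matches the number mentioned earlier):

    # Bootstrap the L1-logistic fit: per-coefficient variance across replicates,
    # plus the fraction of replicates in which each coefficient is exactly zero.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def bootstrap_coefs(X, y, n_reps=20, seed=0):
        rng = np.random.default_rng(seed)
        coefs = []
        for _ in range(n_reps):
            idx = rng.integers(0, len(y), size=len(y))  # resample rows with replacement
            clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
            clf.fit(X[idx], y[idx])
            coefs.append(clf.coef_.ravel())
        return np.array(coefs)  # shape: (n_reps, n_features)

    coefs = bootstrap_coefs(X, y)
    se = coefs.std(axis=0, ddof=1)      # bootstrap standard error per coefficient
    p_zero = (coefs == 0).mean(axis=0)  # fraction of replicates with coef == 0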
I think the question is that these don't look like NormalCDF(coef/se) p-values, given the coef and se you report. They tend to be too small.
From a frequentist perspective, counting zeroes doesn't make much sense, because under the null of coef=0 there is still a chance you don't estimate coef=0, even after regularization.
I thought this problem was limited to Digg, but I've experienced the same with my submissions. It's funny that people judge content by headlines; we need a better way.
Find someone who reads Hacker News [1] and blogs. Subscribe to their RSS feed. There's a trick to [1], no doubt. But for most people the time savings far outweigh the occasional mismatch between your interests and the delegate's.
[1] For the same types of articles you do, ideally.
That's exactly what I do, though I don't read HN every day. I have a script off in the cloud watching the comments RSS feed and showing me every article that certain people comment on. As a quality filter, I find it better than the front page.
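It's not the exact script, but the idea is roughly the sketch below; the feed URL is a placeholder and the entry fields (author, link, title) depend on whatever comments feed you actually point it at:

    # Poll a comments feed and surface any story that a chosen set of users
    # has commented on. Placeholder URL and usernames.
    import feedparser

    WATCHED_USERS = {"someuser", "anotheruser"}        # people whose taste you trust
    FEED_URL = "https://example.com/hn-comments.rss"   # placeholder comments feed

    def interesting_stories(feed_url=FEED_URL):
        feed = feedparser.parse(feed_url)
        seen = set()
        for entry in feed.entries:
            author = getattr(entry, "author", "")
            link = getattr(entry, "link", "")
            if author in WATCHED_USERS and link not in seen:
                seen.add(link)
                yield entry.get("title", link)

    for title in interesting_stories():
        print(title)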
Curious behavior arises from HN's feed inside Google Reader with "Sort by Magic" turned on. It seems to keep the good stuff towards the top, but anything really spammy and sensational occasionally takes the top spot (so watch out for anything suddenly hitting #1 in there), and you tend to miss some of the more obscure goodies, which arguably I miss from time to time anyway. Still, it is a curiously different ranking, probably driven mostly by Google Reader "likes" and sharing.
Why showing the future is essential to acquiring data
Noted.