Hacker News
Alex Payne — Fever and the Future of Feed Readers (al3x.net)
77 points by GVRV on July 19, 2009 | 25 comments



Here's my rule: if a feed emits more than about 5 items per day, it's probably a shitty feed. More importantly, anything that is actually interesting on such feeds will be repeated here on Hacker News, on Digg or reddit, or on one of the feeds that I do follow.

I largely use my feed reader to make sure I don't miss stuff from the sites that publish much more irregularly (and so that I don't have to periodically check them).


Nearly 2 years ago I started using Google Reader. I signed up for a few great feeds and read it every morning and every night. Over the following months I signed up to more and more feeds, eventually to the point where my reader was flooded with up to a thousand articles per day, and I struggled to find time to do my work between keeping up with this influx of information.

Then came one particular week when things got too heavy, and I just didn't have time.

Now, I wouldn't say I have OCD, but I really dislike doing things out of order. I can't watch a trilogy out of order, and I can't start a new TV series anywhere but S01E01. When I returned to my reader I suddenly had 6000+ articles to read, and I dreaded the thought that I'd have to read some out of order.

I don't know if I'm alone here, but eventually I no longer enjoyed the prospect of opening Google Reader. I felt I had to, to keep up with all this information, but it ceased to be fun. It's probably been a month since I opened it; instead I just casually read reddit and HN, plus a few forums I keep in my Firefox toolbar and a few daily emails I've subscribed to.

One day I will likely go back to Google Reader and somehow cull the hundreds of subscriptions, all neatly categorised into topics and subtopics, back down to a number I can read in an hour per day. But I suspect I'll find that difficult, and so instead I just stay away.

And that's my story about RSS.


As always, it depends.

I subscribe to a small number of feeds. But several of them are themselves aggregating multiple feeds (e.g., running some sort of "planet" software), and so can produce lots of items depending on who's active on a given day.

I find this to be fairly useful (since it lets me get good coverage of particular niche topics I care about), but I'm also very careful about what I subscribe to and who I trust to choose the feeds which get aggregated.


I disagree to a certain extent. I see your point about using a feed reader for irregular sites, but I think you're just as likely, if not more likely, to accidentally miss something on a site like HN or Digg because there's such a high volume of posts.

The fact that you're trusting "the crowd" to do your filtering at all presents a problem, because even in a community of like-minded people like HN we all have our own special interests which aren't necessarily shared by the crowd. (For example, I program in .NET, but most HN readers don't, so a lot of good .NET stories never make it out of the "New" section.)

So in the end that's the problem the author is laying out: how do you take a large number of messages and make the feed reader as effective as it is with the small sites? I don't know the answer either, but I think it's still a huge problem left on the table.


My rationale is this: high volume means general interest, and I do not want to be the filter feeder for general interest. Importantly, I don't care if I miss a few things; I'm eliminating the vast majority of false positives. I run the risk of a few false negatives, but I'm already roughly at my limit for dealing with the true positives anyhow, so it doesn't really matter: nothing I miss is going to cause me to lose a hojillion dollars or anything serious.


Aside from some special-interest forums, I've ended up reading just HN.

(Warm thanks to you guys for being my filtering function.)

One by one, I dropped the various news sites from my Mozilla toolbar. I must say I never really bought into the RSS boom, as the best sites I read were pretty simple and looked much like a beautified RSS layout anyway.

First went Slashdot. That was years ago, about the time Reddit started. Reddit lost it last year: too congested with useless links and a low signal-to-noise ratio. I've been dropping many of my national news sites, too -- tech news and otherwise -- leaving only those of the best quality. Then I dropped those as well, because not much in the mainstream really interests me. That was a big thing I had to accept. I really don't have to know the details of some crisis 5000 km away, no matter how civilized or educated knowing them is considered to be; only if I start hearing about a major change in the situation will I look up some more news.

Last spring I dropped reading the website of the main newspaper in Finland. The news was mostly uninteresting, except maybe for some local things. I still check their news listing maybe once a week, but I rarely read any news items. I realized that I'll hear about anything important from other people anyway, so I'm effectively using my friends and co-workers as a live filter.

TV and TV news I dropped 10 years ago.

So HN it is, plus a few web forums and mailing lists, but those provide a mix of both information and connections to people. People are usually more interesting than the latest "facts" about random subjects.


There is another issue: a lot of quality content is not available in feed form: event listings, shopping deals, modern art, or the latest episodes of LOST available on Hulu.

Thus, an interesting opportunity is extraction + feed reader: you first extract and format quality content, then serve it to the user. A few days ago I released a demo, Photo Reader. Go check it out: http://semabox.com

I am actively working in the area of "semantic news". Talk to me if you are in the same field.

Yury Lifshits yury@yury.name


If you want the most recent episodes of LOST, this should work:

http://bit.ly/gJtxL

It may not be so useful right now, since the season is over.

DISCLAIMER: I currently work for this company (www.truveo.com). We run a video search engine.

AFAIK, Hulu doesn't provide episodes of LOST; they're only available on ABC's website.


Cool!

Does Truveo have well-formatted results in XML (title-link-thumbnail-embedcode)?


Yes, I believe so. We use Yahoo's media RSS (MRSS) standard for the feeds on the site: http://video.search.yahoo.com/mrss

You should always get title/link/thumbnail/source, plus embed code and other metadata (description, tags, runtime, etc.) if it's available.
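For anyone curious what consuming that looks like in practice, here's a minimal sketch in Python using the feedparser library (the feed URL is a hypothetical placeholder; it assumes the feed carries standard media:thumbnail elements, which feedparser surfaces as media_thumbnail):

    import feedparser  # pip install feedparser

    # Hypothetical MRSS feed URL -- substitute any Media RSS feed.
    FEED_URL = "http://example.com/videos.mrss"

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        title = entry.get("title", "")
        link = entry.get("link", "")
        # feedparser maps media:thumbnail elements onto a list of dicts.
        thumbs = entry.get("media_thumbnail", [])
        thumb_url = thumbs[0].get("url") if thumbs else None
        print(title, link, thumb_url)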


That's a pretty interesting point. I used to use Yahoo Pipes to make my own DIY feeds from sites that didn't have them, but it ultimately ended up being too much of a hassle, so I stopped. I would definitely be interested in something that solves the problem a little more directly.
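For what it's worth, the DIY part doesn't strictly need Pipes. Here's a rough Python sketch of the same idea: scrape a page that has no feed and emit a bare-bones RSS document (the page URL and CSS selector are hypothetical and would be site-specific; assumes the requests and BeautifulSoup libraries):

    import requests
    from bs4 import BeautifulSoup
    from xml.sax.saxutils import escape

    # Hypothetical page without a feed; the selector is site-specific.
    PAGE_URL = "http://example.com/news"
    ITEM_SELECTOR = "h2 a"

    html = requests.get(PAGE_URL, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")

    items = []
    for a in soup.select(ITEM_SELECTOR):
        title = escape(a.get_text(strip=True))
        link = escape(a.get("href", ""))
        items.append("<item><title>%s</title><link>%s</link></item>" % (title, link))

    rss = (
        '<?xml version="1.0"?><rss version="2.0"><channel>'
        "<title>DIY feed</title><link>" + escape(PAGE_URL) + "</link>"
        + "".join(items) + "</channel></rss>"
    )
    print(rss)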


Sooner or later you'll have to create Your Personal News Agency. A reader or any aggregator is just the INPUT part of the job, but how do you process and make available the generated knowledge, the OUTPUT? In a wiki or a blog post? Neither of them is semantic, so they're inappropriate for storing knowledge.

Future readers must focus on the output part of the process; otherwise there will be so much input without proper output that everything will look like noise.

Signal vs. Noise is in fact your I/O rate.


When reading information on the web or through my feed reader, newsbeuter, I use Zim (http://zim-wiki.org) to take notes on whatever I think I may want to refer back to in the future. I place notes in an appropriate category, apply a datestamp, and optionally add some tags.

Zim automatically indexes everything, so when I need to find out how to apply that cool Rails trick I read about 3 weeks ago, it's easily found.

So, yes, output is very important, and taking notes helps me get a lot more out of my online reading than if I simply tried to make my brain act as a sponge.


I started using Snackr a few months ago and have been extremely pleased with it. It shows items from a list of feeds I can set up, but the items are displayed randomly. It has a ticker-style display, no unread counter, and a nice clean interface.

The only drawback? Development seems to have died on the project, which sucks. So I'm just going on with the last stable release, which creaks along.


I've written a similar URL-matching-between-feeds system for my own use: http://crowdwhisper.com/crowd/1/items (the output is also fed to Twitter: http://twitter.com/crowds)

The code is in Rails, and could use some bugfixing, refactoring and general TLC :) If anyone is interested, email niryariv@gmail.com for SVN access.
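Not the Rails code above, but for anyone who wants the gist of URL matching between feeds, here's a minimal Python sketch (the feed URLs are hypothetical placeholders; assumes the feedparser library): count how many distinct feeds link to the same, lightly normalized URL and surface anything mentioned more than once.

    from collections import defaultdict
    from urllib.parse import urlsplit, urlunsplit

    import feedparser  # pip install feedparser

    # Hypothetical feed URLs -- substitute your own subscriptions.
    FEEDS = [
        "http://example.com/a.rss",
        "http://example.org/b.rss",
        "http://example.net/c.rss",
    ]

    def normalize(url):
        # Crude normalization: lowercase the host, drop query string and fragment.
        parts = urlsplit(url)
        return urlunsplit((parts.scheme, parts.netloc.lower(), parts.path.rstrip("/"), "", ""))

    sources = defaultdict(set)   # normalized URL -> set of feeds linking to it
    titles = {}
    for feed_url in FEEDS:
        for entry in feedparser.parse(feed_url).entries:
            link = entry.get("link")
            if not link:
                continue
            key = normalize(link)
            sources[key].add(feed_url)
            titles.setdefault(key, entry.get("title", key))

    # URLs mentioned by at least two different feeds, "hottest" first.
    for url, feeds in sorted(sources.items(), key=lambda kv: len(kv[1]), reverse=True):
        if len(feeds) >= 2:
            print(len(feeds), titles[url], url)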


One of my biggest problems with Google Reader is those annoying unread counts. It makes it feel like email you have to get through ... so I quit Google Reader about a year ago, and started using Twitter as my new "Feed Reader". A stream you can check out when you want, and ignore when you don't want, is really nice. Plus, it's easier to whip through 140-character blurbs than paragraphs of text.


To solve your problem: in Google Reader, click the dropdown in the Subscriptions pane and click "Hide unread counts".


Nice tip, thanks.

You'll probably want to do this for the "All items" section as well; that'll take care of the unread count in the page's title.


Ah, I never noticed that :-)


I keep pruning, but I haven't been able to get away, partly because I read things that never show up anywhere else, and partly because a feed reader is still the most efficient way to consume information, at least for me, since you can crunch through full-text posts in one place. That said, other sources, notably FriendFeed and the aggregators, have led me to remove all the feeds whose good stories end up bubbling to the top there anyway.


This logic makes no sense to me. To me the whole point of a feed reader is to not miss items. So you're saying you went to Twitter so you could "ignore it when you don't want", but that means you never needed a feed reader in the first place.

I mean, if you don't care about missing important items and Twitter works for you that's great. But to me it doesn't solve the problem feed readers are trying to solve.


I don't think that's the original problem at all. The original problem was simply aggregating a bunch of feeds into one page. I use my feed reader a lot and have no problems with using the "mark all as read" button every so often.


I also use Twitter as a sort of malformed feed reader. In fact, I blogged about it a bit ago, so in lieu of writing something new: http://scigrad.blogspot.com/2009/04/science-twittering.html


I would agree there is not much point subscribing to TechCrunch, Mashable, et al., but there are other types of content, like:

- friends' blogs

- vanity/competitor searches

- niche blogs that you follow

[Warning: blatant plug coming] I use http://friendbinder.com to follow my friends' RSS, Twitter, Facebook, etc. in the same place. I don't use a feed reader for big news sites anymore - they are just too noisy.


RSS seems well suited to feeds that update less than once a week, since I'll probably forget to check those sites otherwise. For example, my blog posting rate is about one post every two months.

If a site updates at least once a day I'll probably remember to check it.



