Not hard at all. We actually had that in early versions but removed it, because it's really hard to optimize against functions, and you can easily break the hub by injecting long-running loops.
Plus, queries are where the power of pubsub.io lies. Queries can be indexed and compiled on the fly, so hubs scale very well.
If you really need functions, you can fork the project and I can point you to how to add them.
I'd love to see functions too, and am not sure your concerns are necessarily dealbreakers. Optimization should be easier if you use something like node-burrito for the low-hanging fruit, and node.js developers are already aware that long synchronous loops should be avoided in a single-threaded environment.
I think you've convinced us to give functions a second look. As long as we can have a good story for auth, I can't see why we shouldn't add them, given the right optimizations are in place. They could be very powerful. https://github.com/pubsubio/pubsub-hub/issues/3
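To illustrate the tradeoff discussed above, here's a minimal sketch of why declarative queries are easier to optimize than arbitrary functions. This is not pubsub.io's actual implementation; the query shape and the `compileQuery` helper are assumptions made for illustration only:

```javascript
// Sketch: compiling a flat JSON query into a matcher function.
// The keys are known up front, so a hub can index subscriptions by them;
// a user-supplied function is opaque and could loop forever.
function compileQuery(query) {
  var keys = Object.keys(query); // visible to the optimizer/indexer
  return function (doc) {
    for (var i = 0; i < keys.length; i++) {
      if (doc[keys[i]] !== query[keys[i]]) return false;
    }
    return true;
  };
}

var match = compileQuery({ type: 'message', room: 'lobby' });
console.log(match({ type: 'message', room: 'lobby', text: 'hi' })); // true
console.log(match({ type: 'ping' })); // false
```

Because every subscription is just data, the hub can inspect, index, and recompile it at will, which is hard to do safely with arbitrary callbacks.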
I believe you are. It's not backup, it's sharing: one-stop file sharing versus having to choose a sharing site by file type (e.g. pictures: TwitPic, video: Twitvid, video+pictures: Flickr, PDFs: Scribd, slides: SlideShare, ...).
I recently used Dropbox to collaborate on an App Engine Python project with a friend and loved it. Since Python is interpreted, we didn't have too many issues with working on the same file at the same time (no compilation errors).
It kind of does what e.ggtimer.com does, but only for minutes (just type in the minutes and hit enter). I could add some visual indicator of the percentage of time left. You can also link to a timer, like http://www.ianjorgensen.com/timer#10, or use http://www.ianjorgensen.com/timer#0 to count up.
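The hash-based linking could work along these lines. This is a hypothetical sketch, not the actual source behind the site; `parseTimerHash` is an invented helper:

```javascript
// Hypothetical sketch of parsing the URL fragment used by the timer links
// above: "#10" means count down ten minutes, "#0" means count up.
function parseTimerHash(hash) {
  var minutes = parseInt(hash.replace('#', ''), 10);
  if (isNaN(minutes)) minutes = 0; // no/invalid fragment: default to 0
  return {
    minutes: minutes,
    countUp: minutes === 0 // "#0" switches the timer to counting up
  };
}

console.log(parseTimerHash('#10')); // { minutes: 10, countUp: false }
console.log(parseTimerHash('#0'));  // { minutes: 0, countUp: true }
```

In the browser, the same helper would be fed `location.hash` on page load.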
Knowing that somebody actually read/watched/listened to... something is very valuable, and I haven't seen anybody create a reading list that leverages that data.
I don't consider an RSS reader a reading list; to me, a reading list is a hand-picked list.
We need a Digg/Instapaper, and I think Donefeed can be it.