An HN style social news site written in Ruby/Sinatra/Redis/JQuery by Antirez (github.com/antirez)
145 points by smn on Oct 20, 2011 | 57 comments



Perfect timing as I'm planning to use Redis for the first time soon and this is going to be a great resource for best practices. Thanks, Antirez!

Speaking of best practices, putting serialized objects into the database (as you're doing with the comment objects) is usually considered an anti-pattern in the SQL world. Could you briefly explain (beyond what you said in the Readme) why you've used this approach for comments, but not for, say, news objects? Was this decision solely made to work around anticipated performance/memory bottlenecks and if so, what are the trade-offs, and are there any rules of thumb for making the same decision for object types in other applications?


Hello julian,

the difference between comments and news is that a thread (the collection of comments for a given news item) is a hash made of sub-hashes: the hash maps ID -> comment_hash, and the comment hash just contains the different fields.

When there are these two levels, storing the first level as a Redis hash and the second level as JSON leads to very good memory usage, because the hash is internally stored as a linear array. We can do that since we know most news items will have just a few tens of comments. If there are more, the hash transparently turns into a real hash table: more space, but worth it for the rare cases when this is needed.

A news item, instead, is just a collection of fields. There is no reasonably sized outer object, like an "all the news" object (it would be too big), so there is no gain in using this approach there.

What is good about storing hash sub-objects as JSON is that Redis unstable just got JSON support in Lua scripts, so it will also be possible to manipulate these objects by sending Redis small Lua scripts.
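
A minimal sketch of what this two-level layout might look like with the redis-rb gem (key names and comment fields here are illustrative, not necessarily the ones Lamer News uses):

  require "redis"
  require "json"

  r = Redis.new

  # One hash per thread: field = comment ID, value = JSON-serialized comment.
  comment = { "user_id" => 42, "body" => "Nice post!", "ctime" => Time.now.to_i }
  r.hset("thread:comments:1234", "1", comment.to_json)

  # Reading a single comment back is just HGET plus a JSON parse.
  parsed = JSON.parse(r.hget("thread:comments:1234", "1"))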

I hope this clarifies the issue.


Hi Salvatore,

many thanks for the reply, that does clarify the issue.

So when sub-objects are only a few dozen per collection (per parent object) on average, storing them as JSON blobs allows Redis to "inline" them, yielding good memory usage. But for large numbers of items per parent, Redis can't inline them so you might as well represent them as a hash, which can never be inlined. Is this a fair summary?

I guess that ideally, all objects would always be represented in the same way from a client perspective, and the database engine would decide which internal format is best suited for storage, maybe using hints provided by the client, and handle any necessary (de)serialization as an implementation detail. That said, I know Redis is still a young project and I suppose this is something you guys are planning to improve long-term anyway.

Cheers!


What Antirez is referring to is the fact that when a Redis hash is fairly small, it is stored as an array that is scanned linearly (worse time complexity, but lower memory usage), and is only "upgraded" to a true hash table when there are many key/value pairs. (This is mostly an implementation detail, however - from an API standpoint, you interact with hashes the same regardless of what internal representation they are using.)
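
This switch is easy to observe directly; a small sketch with redis-rb (the threshold is configurable, and the compact encoding is called "zipmap" or "ziplist" depending on the Redis version):

  require "redis"

  r = Redis.new
  r.del("small_hash")

  10.times { |i| r.hset("small_hash", "field#{i}", "value#{i}") }
  puts r.object("encoding", "small_hash")   # compact encoding (zipmap/ziplist)

  1000.times { |i| r.hset("small_hash", "field#{i}", "value#{i}") }
  puts r.object("encoding", "small_hash")   # => "hashtable" once past the threshold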

In both cases, Lamernews still stores comment IDs as the hash keys and the JSON-serialized structures as the hash values; antirez was just commenting on a memory-saving feature of Redis for small hashes.


Thanks for chiming in, but now I'm confused. Antirez said:

  When there are these two levels, storing the first level as a Redis hash and the second level as JSON leads to very good memory usage, because the hash is internally stored as a linear array.
I'm reading this to say that memory usage wouldn't be as good if the second level was stored as a hash instead of a JSON blob.

My specific question was: why store the comment as a JSON blob as opposed to a hash? I think Antirez's answer was, in a nutshell: because a hash of hashes can't be stored as an array. For news items it doesn't matter because the hash couldn't be stored as an array anyway, due to the (anticipated) large number of news items.

Are you saying this interpretation is wrong?


> worse time complexity, but lower memory usage

While the time complexity is worse, absolute performance might be superior, both because the hashing arithmetic is skipped and because the lower memory usage lets more data fit in the CPU cache.


> Don't use templates, they suck.

I'm curious, why do templates suck?


What do you do when you get more data than fits into RAM?

I've been working on a 'for fun' application with Redis, and had been counting on some sort of on-disk storage.

(Because storing everything in RAM is comparatively expensive, I had figured I could use Diskstore or VM with a small server to start.)

Now I'm considering porting over to a MongoDB or SQL backend, because the disk-based storage options won't be supported in Redis in the future.

What should I do? Simply use Redis as a cache? Buy RAM? Doesn't that seem to limit its utility and complicate things?


What do you do when you get more data than fits on disk?

You do the same with memory, either you buy more RAM, or you distribute the data across multiple hosts (that is what we are trying to do with Redis Cluster).

The whole point is: what order of magnitude is my data? Is memory a big enough store for my needs?

For instance, for Lamer News you can store many millions of news items and comments in 1 GB, so it makes perfect sense for this application. For other applications you want a mixed solution where fast metadata is stored in Redis and larger documents on disk in some other database. For other apps you need Cassandra and nothing else, and for others a good SQL database. It depends, as usual :)
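
(As a rough back-of-envelope check, with assumed sizes: if a serialized comment plus Redis overhead averages around 300 bytes, then 1 GB / 300 bytes is roughly 3 million items, so "many millions" of short news items and comments in 1 GB is plausible.)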


The two attempts at disk storage so far - VM and Diskstore - are both based on the concept of RAM as the primary datastore and the disk as merely auxiliary storage. However, antirez is a perfectionist, so he scrapped both of them when they didn't work as well as he had hoped.

In the long term (i.e. after cluster is finished), there are plans to manipulate data structures directly on disk (an incredibly elegant solution, and also really good for SSDs). Though in the short term, you are right in that RAM is far more expensive than disk, and this does limit the utility of Redis as a primary datastore.


So... what's going to happen when lamernews runs out of RAM?


I think it is actually pretty obvious what /happens/ when it runs out of RAM.

You have to either buy more RAM, start using the VM option, or start using OS virtual memory. The latter two will slow it down a bit.

Another possibility is to figure out which sets of data are taking up the most space, and offload them to a separate disk-backed DB, and just cache the most frequently accessed subset of that data through Redis (this could be easily done through key expiration). I find this option kind of painful, mostly because I really like the way that Redis approaches data, and using another database is comparatively inconvenient.
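
A rough sketch of that read-through pattern in Ruby (the disk-backed lookup here is a hypothetical placeholder):

  require "redis"
  require "json"

  $r = Redis.new

  # Serve from Redis if cached; otherwise hit the disk-backed store
  # and cache the result with a TTL so rarely-read data expires.
  def fetch_document(id)
    cached = $r.get("doc:#{id}")
    return JSON.parse(cached) if cached
    doc = load_from_disk_backed_db(id)        # hypothetical slow path
    $r.setex("doc:#{id}", 3600, doc.to_json)  # keep for an hour
    doc
  end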

Having worked with it a bit, I think Redis is a completely awesome piece of technology, and in practice it is perfectly good to use for something like this, at least in the short term. I don't want to bring down Antirez's hard work in any way.

Being the type of developer that hems and haws over corner cases and unlikely what-ifs, however, I felt the need to ask, as it has been in the back of my mind for a while.

I am encouraged that there will be a version that manipulates the data structures on disk, as that was the best solution that I could think of as well. (Basically you could then run two instances of Redis, a RAM-based one and a disk-based one, echoing commands to both, but allowing data on the disk-backed Redis to expire, and taking responses from whichever returns an answer first.)

My only hope is that clustering won't take too long, and that it won't be abandoned in favor of a different castle in the sky if it doesn't turn out to be perfect.


That's pretty mind-numbingly self-evident, isn't it? The kernel kills Redis and the site is down.


Depending on your settings, the kernel could also page out some RAM, which early reports suggest isn't so bad on a high-end SSD.


Isn't that more or less the exact use-case for Membase? Or does Redis offer a lot that Membase doesn't when the data does all fit in memory?


I don't know if the creator is going to see these comments but lamernews.com goes to a GoDaddy parked domain page. Not sure if DNS changes just need to propagate or what.


Hi, this is just the first public release. Hope to get more features inside and more testing before actually installing the first version into lamernews.com.

The code base has only had a few days of after-work hacking, so it will probably take a few more days to be ready.

Thanks for the hint.


Just noticed you're using a single pass of SHA1 (salted) for password storage. Would you be opposed to a patch using a safer password storage mechanism? If not, I'll throw one your way in a couple hours.


If what you are thinking about is using Blowfish or another algorithm with a slow key-scheduling step, what if we just iterate SHA1 N times? It should be exactly as secure, like:

SHA1(SHA1(SHA1(0|pass)|1)|2) and so forth.

This way there is no requirement for an additional library.


I would strongly recommend that rather than doing that, go with standard PBKDF2. In essence, HMAC(HMAC(HMAC(...(password)))) with a per-user salt. I generally recommend 10k+ rounds with PBKDF2 (each one is cheap). This wouldn't give you an additional dependency and is super easy to put in place -- I'll do it, if you want.
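
For reference, a minimal sketch of that in Ruby using the standard library's OpenSSL bindings (iteration count, key length, and storage format are illustrative):

  require "openssl"
  require "securerandom"

  password   = "correct horse battery staple"
  salt       = SecureRandom.random_bytes(16)   # per-user salt
  iterations = 10_000
  key_length = 20                              # bytes of derived key

  derived = OpenSSL::PKCS5.pbkdf2_hmac_sha1(password, salt, iterations, key_length)

  # Store salt and derived key together, e.g. hex-encoded.
  stored = [salt, derived].map { |b| b.unpack("H*").first }.join("$")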


If you concatenate your password and a salt, then iterate a cryptographic hash like SHA256 a few thousand times, this is almost exactly PBKDF1. It's a good, respectable password hashing scheme. The main advantage that PBKDF2 offers is the ability to produce arbitrary output sizes.

(The difference between this and PBKDF1 is that PBKDF1 requires using either MD2 or SHA1 as the hash function, and hasn't been updated to reflect the availability of SHA-256 and SHA-512.)



Note that PBKDF1 is exactly that: vanilla chaining of the same hash function. And AFAIK it is not believed to have known attacks, with the only drawback being the fixed output size.

I guess the XOR approach used in PBKDF2 is useful because you want to compute T1, T2, T3, ..., Tc, multiple output blocks all derived from the same input, so this gives more information to the attacker, and the scheme is designed to avoid exposing some "state" that would be easier to analyze. Not sure, but the point is, I don't think PBKDF1 is unsafe either, and it is just chaining.


Why pass up an incredibly well-tested tool that will hands down be more secure than anything you can write on your own?


It's very much up for debate how much bcrypt, scrypt, and friends buy you over iterated HMACs like PBKDF2, which can be implemented in just a few lines of code. While I'd recommend the use of bcrypt or scrypt for most purposes, I personally have no qualms recommending something like PBKDF2 with a large number of iterations -- it's pretty damn sound. Iterated SHA1 isn't much worse than any of them, but there are a couple optimizations you can make while brute-forcing them, which you can't make with iterated HMACs, so I'd recommend going with that route.


daeken, thanks for sharing your knowledge. We should follow your example of understanding how things work instead of just repeating "use bcrypt".


I agree, but we could do worse than to have "bcrypt" be the lay developer's default answer. At worst that is merely annoying, whereas "secret salt construction using Whirlpool as the hash" or whatever craziness is actually dangerous.

Your code is neat. I don't care what hash you use. I'm just saying.


If iterated SHA1 and bcrypt are (all other things equal) equally secure, but you're more likely to mess up while implementing your own iterated SHA1, then bcrypt is more secure.


Use bcrypt, if for no other reason than that your code is likely to be forked by people who don't want to understand how things work.


Try

SHA256(SHA256(... SHA256(password + salt)))

With perhaps five thousand iterations of SHA256, if you really don't want to depend on bcrypt. Or just use bcrypt; it's a good library, and removes the temptation to get creative with password hashing.
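
In Ruby that suggestion might look roughly like this (iteration count and salt handling are illustrative):

  require "digest"
  require "securerandom"

  # Iterate SHA256 over password + salt many times to slow down brute force.
  def slow_hash(password, salt, iterations = 5000)
    digest = password + salt
    iterations.times { digest = Digest::SHA256.digest(digest) }
    digest.unpack("H*").first
  end

  salt   = SecureRandom.hex(16)   # per-user salt
  stored = slow_hash("hunter2", salt)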


Do not do this, repeated SHA'ing is not cryptographically secure: https://github.com/antirez/lamernews/pull/8#issuecomment-247...


Oh really? If you use SHA1 then the scheme I just described is a pretty well-established method called PBKDF1, specified in RFC2898:

http://tools.ietf.org/html/rfc2898#section-5.1


"PBKDF1 is recommended only for compatibility with existing applications since the keys it produces may not be large enough for some applications." — RFC 2898, September 2000


The keys it produces are big enough for this application. PBKDF2 can produce output keys of arbitrary size, which is why PBKDF1 got deprecated, but it's not always necessary.


I agree completely... I was hoping to see a live version. Live versions can help OSS spread and generate hacker interest.

However, it should be noted that I'm thankful that the author decided to share this.


Hi Jon! Here it is almost the contrary: I wrote the code specifically to run it as a real service. It just needs a few more days.

But actually... I can easily run it on a server of mine just to show it to you. Let's try to install it... just a moment.


Here we are: http://178.79.145.225:9999/

I'll leave it running for a few hours assuming it will not crash. It's very new code ;)


Readable fonts, thank you.


Here it is running on Cloud Foundry:

http://lamernews.cloudfoundry.com


Hi Antirez,

Looking at the source, you use side effects to return user objects and such. I've always been taught that side effects are a bad thing that creates complex code, which seems to go against one of the goals listed in your README.md.

  # Try to authenticate the user, if the credentials are ok we populate the
  # $user global with the user information.
  # Otherwise $user is set to nil, so you can test for authenticated user
  # just with: if $user ...
  #
  # Return value: none, the function works by side effect.
  def auth_user(auth)
      return if !auth
      id = $r.get("auth:#{auth}")
      return if !id
      user = $r.hgetall("user:#{id}")
      $user = user if user.length > 0
  end
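For comparison, a hypothetical side-effect-free variant (not from the repo) would return the user hash instead of assigning a global:

  # Hypothetical alternative: return the user hash (or nil) rather than
  # populating the $user global; the caller decides what to do with it.
  def fetch_user(auth)
      return nil if !auth
      id = $r.get("auth:#{auth}")
      return nil if !id
      user = $r.hgetall("user:#{id}")
      user.length > 0 ? user : nil
  end

  # user = fetch_user(request.cookies['auth'])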
Can you explain why you used side-effects?


What I like most about this is that it is a non-trivial, non-toy example of using Redis effectively. Thanks antirez!!


Great stuff, thanks antirez. I like it when creators of libraries build a complex system to eat their own dog food. It helps me visualize use cases other than simple 15-minute blog projects.


Thanks, this was one of the main goals: to have a non-trivial example, and even to put it into production.

It is very helpful for me to put myself in the user's shoes; it makes me understand a lot more about Redis. If you are always on the other side of the table you miss a lot.


> Don't use templates, they suck.

This is surprising. Why not make minimal use of ERB or Mustache?


It seems like they just reimplemented a custom HAML subset.


Looks more like CGI::HtmlExtension in Ruby's stdlib, but with a hash as params.


As I've already mentioned to antirez, the only suitable mascot is this - http://www.google.com/search?q=llama&hl=en&tbm=isch&...


Join us in #lamernews on freenode, also if antirez reads this you're welcome to ops.


This is really cool! I just setup my own deployment of this at rubynews.heroku.com :) I've fixed some styling issues and I'm thinking of tweaking a few things... I may contribute to this on github. Thanks!


This should be the new "Hello world" for web-oriented languages and frameworks. I'd love to see this ported idiomatically to different languages.


Your site lamernews.com is showing godaddy adverts.


It would be great if somebody added some tests.


Yeah... I wonder where all the tests are :)


Redis seems like a poor choice if you want the comments and stories to stick around for a while. It's designed to be an in-memory database...


Redis is perfectly fine for this application, both from the point of view of data persistence since AOF is very durable, and from the point of view of space needed. This application is designed to hold a lot of news and comments even using little memory.


> This application is designed to hold a lot of news and comments even using little memory.

Can you post some benchmarks on that? I'd like to see how many (average-sized) news items/comments can be stored on a 512 MB VPS.


Considering the creator of lamernews is also one of the creators of Redis, he probably knows what he's doing.



