Can you elaborate on this point, please? My reading of the article and docs is that perf is expected to scale linearly with the number of nodes (and therefore with dataset size).



Let's say you get 50 writes/sec per node. You'd need what, 1000 nodes to reach the playing field of Postgres with a simpler and cheaper setup? Right now it's really not competitive unless they improve the performance 1000 times. It makes no sense to buy and administer many more machines to get the power you can have from one machine on another, proven tech.
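
Rough math behind that, assuming linear scaling and a ballpark figure of ~50k writes/sec for a single tuned Postgres instance (both numbers are illustrative, not benchmarks):

    # Back-of-envelope only; figures are assumptions, not measurements.
    postgres_writes_per_sec = 50_000   # assumed single Postgres instance
    per_node = 50                      # the worst-case per-node figure above
    print(postgres_writes_per_sec // per_node)  # -> 1000 nodes to match it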


> Let's say you get 50 writes / sec per node.

If the DB can only handle 50 ops/sec, then the point you are making here is valid.

But see https://news.ycombinator.com/item?id=13661349: that's a pathological worst-case number. You should find more accurate performance numbers for real-world workloads before drawing sweeping conclusions.

Your original comment was:

> Even if they manage to multiply this by 100 on the final release, it's still way weaker than a regular sql db

This is what my comment and the sibling comments are objecting to, and I don't think you've substantiated this claim. A 100x perf improvement is probably well in excess of 10k writes/sec/node, which is solid single-node performance (though you wouldn't run a single-node CockroachDB deployment). Even a 10x improvement would get the system above 1k writes/sec/node, which would allow large clusters (O(100) nodes) to serve more data than a single SQL instance could handle.

Obviously I'd prefer to be able to outperform a SQL instance on dataset size with 10 nodes, but for a large company, throwing 100 (or 1000) nodes at a business-critical dataset is not the end of the world.
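
To make the cluster-sizing point concrete, here's a small Python sketch of how the required node count shrinks as per-node write throughput improves; the target load and the 50 writes/sec baseline are assumptions for illustration, not CockroachDB benchmarks:

    import math

    # Illustrative assumptions, not measurements.
    target_writes_per_sec = 50_000   # hypothetical write load for the dataset
    baseline_per_node = 50           # the pathological worst-case figure above

    for factor in (1, 10, 100):
        per_node = baseline_per_node * factor
        nodes = math.ceil(target_writes_per_sec / per_node)
        print(f"{factor}x per node -> {per_node} writes/sec/node -> {nodes} nodes")

With these made-up numbers, a 10x per-node improvement brings the cluster down to ~100 nodes and a 100x improvement down to ~10, which is why the per-node figure matters far more than the raw node count.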



