My testing showed otherwise, but I'd love to see what you've done. What sort of equipment did you use, what kind of network, and how many IOPS did you see?



SuperMicro has IOPS-optimized Ceph storage SKUs; that is what we used. It looks like they have updated the lineup since we purchased:

https://www.supermicro.com/solutions/storage_ceph.cfm

We went for upgraded network capacity, though: 20 Gbit/sec cluster backend and 20 Gbit/sec cluster frontend.

Each node had 12 * 8 TB drives, with an 800 GB NVMe device for the Ceph journal. A fast, large Ceph journal was key.
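
For anyone setting this up: journal sizing is one line of ceph.conf, and with ceph-deploy the journal device is just a third field on the OSD spec. Host and device names below are placeholders, not our actual layout:

    # ceph.conf: size for each FileStore journal, in MB
    [osd]
    osd journal size = 20480

    # one OSD: data on a spinner, journal on the shared NVMe
    ceph-deploy osd create node01:/dev/sdb:/dev/nvme0n1

When the journal target is a whole device, ceph-disk carves a fresh journal partition out of it for each OSD, so twelve OSDs can share the one 800 GB NVMe.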

The total installation was about 3 PB raw, i.e. 1 PB usable with replication size 3: 33 Ceph OSD nodes, 3 Ceph monitor nodes, and Juniper low-latency switching (QFX5100).
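
To spell the math out: 33 OSD nodes * 12 drives * 8 TB is roughly 3.2 PB raw, and with replication size 3 every object is stored three times, which is how 3 PB raw becomes ~1 PB usable. The replica count is just a pool property; a sketch (pool name and PG counts are illustrative, not our actual values):

    # create a replicated pool and keep three copies of every object
    ceph osd pool create volumes 2048 2048 replicated
    ceph osd pool set volumes size 3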

Full IPv6 network on both frontend and backend. 11 nodes per rack, each rack being its own /64 routed domain; 3 racks.
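
Ceph runs fine on IPv6 as long as you tell it to bind v6 and enumerate the per-rack prefixes; the addresses below are documentation prefixes, not our real ones, and the CRUSH commands sketch how each rack becomes a failure domain so the three replicas land in three different racks:

    # ceph.conf: IPv6 binding, per-rack public/cluster prefixes
    [global]
    ms bind ipv6 = true
    public network  = 2001:db8:a1::/64, 2001:db8:a2::/64, 2001:db8:a3::/64
    cluster network = 2001:db8:b1::/64, 2001:db8:b2::/64, 2001:db8:b3::/64

    # CRUSH: a bucket per rack, hosts moved underneath, replicas split by rack
    ceph osd crush add-bucket rack1 rack
    ceph osd crush move rack1 root=default
    ceph osd crush move node01 rack=rack1
    ceph osd crush rule create-simple replicated-by-rack default rack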

I'm no longer doing contract work for the company, but last I heard they were expanding to 6 racks, adding another 3 PB of raw capacity for their growing datasets.

An OpenStack cluster is connected to this Ceph cluster: a 40 Gbit/sec storage backend network, plus a 40 Gbit/sec frontend that carries all VM traffic. So storage and standard traffic don't mix.
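
For anyone wiring up the same thing, pointing Cinder at RBD is mostly one stanza in cinder.conf; the pool and user names here are the usual ones from the Ceph/OpenStack integration docs, not necessarily what we ran:

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <libvirt secret uuid>

Nova's libvirt driver can get a matching images_type = rbd if you want ephemeral disks on the cluster too.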

Even virtualized, the performance and IOPS were good enough that the entire company is moving its bare-metal databases to VMs. Unfortunately, I'm unable to disclose IOPS or Oracle database performance figures due to contractual obligations.
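
What I can say is how you'd measure it yourself: fio ships an rbd engine that talks to the cluster directly, so you can get IOPS numbers without a VM in the way. Pool and image names here are placeholders:

    # throwaway test image, then 4k random writes against it
    rbd create volumes/fio-test --size 10240
    fio --name=4k-randwrite --ioengine=rbd --clientname=admin \
        --pool=volumes --rbdname=fio-test --rw=randwrite --bs=4k \
        --iodepth=32 --direct=1 --runtime=60 --time_based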



