
When you buy OEM drives, yes; when you purchase from a high-touch vendor such as Pure Storage (PS) with proprietary tech, then no.



That’s debatable when you consider the performance you get from Pure.

Our X20, which is on the small end with only 10 drives, can easily do hundreds of thousands of IOPS, supporting over 1,000 VMs and a high-performance ERP solution.

The whole thing costs us about $50k a year. On AWS/GCP that would cost a lot, lot more.


Yes, if you’re treading into io1/io2 territory it might be less expensive than cloud. Most people seem pretty happy with dogshit gp2/gp3 ¯\_(ツ)_/¯


Exactly.


> hundreds of thousands of IOPs

That's like the number for a consumer-grade SSD that costs $100?


Sure, try running a thousand VMs on it though (which is an extremely random workload). A single sequential benchmark at an optimal block size for a consumer SSD is not representative of the real workload these arrays would see.

Doing so is a lot more complex and intensive than what a single consumer-grade SSD can handle.

My point was just that if you wanted to get dedicated IOPS on AWS to match what you get with modern SANs, it’ll cost you far more.
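(As a rough back-of-the-envelope sketch of that claim: EBS io2 charges per provisioned IOPS-month in tiers. The tier prices below are assumptions based on approximate list prices and may be out of date, so treat them as placeholders.)

    # Back-of-the-envelope sketch: monthly IOPS charge for ~300K provisioned
    # IOPS on EBS io2. Tier prices are assumptions (approximate list prices
    # per provisioned IOPS-month); check current AWS pricing.
    def io2_iops_monthly_cost(iops: int) -> float:
        tiers = [(32_000, 0.065), (32_000, 0.046), (float("inf"), 0.032)]
        cost, remaining = 0.0, iops
        for size, price in tiers:
            used = min(remaining, size)
            cost += used * price
            remaining -= used
            if remaining <= 0:
                break
        return cost

    monthly = io2_iops_monthly_cost(300_000)
    print(f"~${monthly:,.0f}/month, ~${monthly * 12:,.0f}/year for IOPS alone")

Even with placeholder prices, the provisioned IOPS charge alone lands well above the ~$50k/year quoted above for the whole array, before adding GB-month storage or instance costs.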


It is random 4K IOPS: https://eu.community.samsung.com/t5/image/serverpage/image-i...

I tend to think your number for PS is likely off.


I get 2 to 2.5 million IOPS for 4K random reads on my personal server.

Running Linux LVM software RAID over 3 x Samsung NVMe SSDs. That's not a mixed read/write measurement, but it's a satisfying number for a not particularly high-end server.

(I use it for a side project's database engine experiments. That level of IOPS supports a very high random query rate.)
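(A quick sanity check on those numbers, taking the drive count from the comment and the rest as plain arithmetic:)

    # Sanity check: 2.5M random 4 KiB reads/s aggregated over 3 NVMe drives.
    iops_total = 2_500_000
    block = 4 * 1024                 # 4 KiB per read
    drives = 3
    bandwidth = iops_total * block   # bytes/s
    print(f"aggregate: {bandwidth / 1e9:.1f} GB/s")
    print(f"per drive: {iops_total / drives:,.0f} IOPS, {bandwidth / drives / 1e9:.1f} GB/s")

That works out to roughly 830K IOPS and 3.4 GB/s per drive, which is plausible for modern NVMe parts at deep queue depths when reads come straight from flash.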


100% read or 100% write is as far from a real workload as it gets. The only exception would be backups (read/restore).

Also, a 1 GB test file often fits in the SSD's RAM cache. Get Iometer, 50R/50W, block sizes from 512 bytes to 16K, and a test area at least half the size of the storage. Then you would see the real performance.

NB: if your random read is way below your random write, you are measuring anything but the storage performance.
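(Iometer is one option; fio is the usual Linux tool. Purely as an illustration of what a 50R/50W mixed random test looks like mechanically, here is a minimal single-threaded sketch in Python. The path, test size, and duration are made-up placeholders, and the test file must be pre-created and sized large enough that the drive's cache can't absorb it, as suggested above.)

    # Minimal mixed random-IO sketch (Linux): 50% reads / 50% writes with
    # O_DIRECT to bypass the page cache, 4/8/16 KiB blocks, queue depth 1.
    # TEST_FILE and TEST_SIZE are placeholders; pre-create the file (e.g.
    # with fallocate) and size it to a large fraction of the device.
    import mmap, os, random, time

    TEST_FILE = "/mnt/scratch/iotest.bin"   # hypothetical path
    TEST_SIZE = 64 * 1024**3                # 64 GiB test area (adjust)
    BLOCK_SIZES = [4096, 8192, 16384]
    DURATION = 30                           # seconds

    fd = os.open(TEST_FILE, os.O_RDWR | os.O_DIRECT)
    f = os.fdopen(fd, "rb+", buffering=0)
    bufs = {bs: mmap.mmap(-1, bs) for bs in BLOCK_SIZES}  # page-aligned buffers

    ops = 0
    deadline = time.monotonic() + DURATION
    while time.monotonic() < deadline:
        bs = random.choice(BLOCK_SIZES)
        offset = random.randrange((TEST_SIZE - bs) // bs) * bs  # block-aligned
        f.seek(offset)
        if random.random() < 0.5:
            f.readinto(bufs[bs])
        else:
            f.write(bufs[bs])
        ops += 1

    print(f"{ops / DURATION:,.0f} mixed IOPS at queue depth 1, single thread")

Real tools also sweep queue depth and worker count, which is exactly the Q1T1 vs Q16T16 argument further down the thread.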


I tried CrystalMark on my desktop SSD with 64 GB, 4K, 50R/50W and got 134K, mostly I think because it has 400K read and 130K write, so writes are the bottleneck.

Sorry, too lazy to learn how to use Iometer, but you probably have an SSD too and can report your results.


NO U?

Okay, not a problem: https://imgur.com/a/teoPGrz

Real world performance[+Mix], Read&Write[+Mix]

One is a Fujitsu DX200 S4 SSD SAN, the other is an HFM256GDJTNG-8310A.

Can you guess which is where?


You have Q1T1 only.


you have no screenshot.


I don't intend to prove anything to you ), but I'm confident you can set Q16T16 and will get much better results.


> I don't intend to prove anything to you

>> but you probably have SSD too and can report your results.

Uh-uh!

I intended to show you the difference between a single NVMe drive and an SSD SAN (a bit old, but still very performant to handle ~900 VMs).

Sure, I can just ramp up the queue depth and see some magical numbers, but for me the real performance is in everyday tasks, and running CrystalMark isn't an everyday occurrence.

Did you guess which one is SAN?


> (a bit old, but still very performant to handle ~900 VMs)

I am a bit confused how random 4K access is relevant to your task; it should more likely be 1 MB sequential access.

My everyday task is tuning a heavy data-processing pipeline, and I am trying hard to achieve those Q16T16 numbers.

> I intended to show you the difference between a single NVMe drive and a SSD SAN

And why do you have such an intention? It is obvious there is a difference.


> how random 4k access is relevant to your task

Sorry? 900 VMs equals 100% full random access. There is no sequential access there, just as I said in my first comment.

> It is obvious there is a difference

Because of your comment[0].

This comment[1] pretty much summarized what I said in a more eloquent way.

[0] https://news.ycombinator.com/item?id=35061337

[1] https://news.ycombinator.com/item?id=35064282


> Sorry? 900 VMs equals 100% full random access. There is no sequential access there, just as I said in my first comment.

It depends on the workload: if they do most of the work in RAM and most of the FS traffic is snapshotting and restoring from snapshots, then you will get 99% sequential IO traffic. If they do some non-trivial FS operations, then you will get Q16T16-style IO traffic. It is very unlikely you will get Q1T1 random.

> Because of your comment[0].
> This comment[1] pretty much summarized what I said in a more eloquent way.

In my view you are jumping from one topic (single SSD vs. NAS) to another (your speculations about benchmarks not representing real-world scenarios) and then back.


In what way? Can it do more or less?


Likely much more: it scales with SSD speed and the number of disks.

Also, I am not sure how it will stack up against some cloud instance with a bunch of SSDs under software RAID.


I agree that if you ran a synthetic benchmark against a brand-new Pure X20 with 10 NVMe drives, you'd see astronomical numbers for IOPS.

That is absolutely not representative of a real workload of mixed reads and writes, different block sizes, potentially different queue depths, all coming in on hundreds or maybe thousands of different volumes.

A single consumer Samsung SSD would hilariously crumble under a real workload; it will NOT deliver hundreds of thousands of IOPS in that environment.

Your benchmark screenshot is the equivalent of showing your pickup truck can do burnouts in the parking lot, and extrapolating from that to think it could keep up with a Ferrari on the Nürburgring.


> That is absolutely not representative of a real workload of mixed reads and writes, different block sizes, potentially different queue depths, all coming in on hundreds or maybe thousands of different volumes.

If you see a bottleneck there, it could be that the actual SSD speed is irrelevant in your case, since the upstream software is not optimized.


You should definitely start a storage company :-)


I am actually trying to get into data processing.


Also compared to EMC storage. I haven't used it since 2015, but they were very, very proud of their storage tools.


We switched from EMC to PS for our on-prem stuff about 5 or 6 years ago. I sat in on some of the RFP process. PS was basically the same price for a vastly superior offering. You had to make a leap of faith, though, that PS would survive as a company.

Now there’s no doubt, so they probably have more pricing power. And EMC most likely has an offering that competes with the performance (or at least as close as they can get). Back then it was flash vs spinning disks.

We do most of our stuff in the cloud, BTW, and PS is faster for cheaper, by a wide margin. But we’re not going to bring that compute back (I’d like to).

One thing I remember was them being very open and direct, and they wanted to know a lot of heuristics about our data so they could give an accurate estimate of speed and dedup, which was really close to the reality. Oh, and we’re using one of their devices for an Itanium OpenVMS cluster; it works like a charm. Try getting a startup to support that kind of setup these days, lol


It can go either way, and it entirely depends on your use case and what you need.

4 PB of cloud storage is probably a bit more than $800,000, and then you still have data "in the cloud" and not local to whatever you're doing with it.
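(For scale, assuming object-storage list pricing on the order of $0.02 per GB-month; that rate is a placeholder, since real pricing is tiered and varies by storage class and region.)

    # Rough cost of 4 PB of cloud object storage. The $0.02/GB-month rate is
    # an assumed placeholder; actual tiers, classes and regions differ.
    capacity_gb = 4 * 1000**2        # 4 PB in GB (decimal units)
    price_per_gb_month = 0.02
    monthly = capacity_gb * price_per_gb_month
    print(f"~${monthly:,.0f}/month, ~${monthly * 12:,.0f}/year, before egress and request fees")

That puts the annual bill in the high six to low seven figures before any egress or request charges, which is the ballpark the figure above refers to.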


Agreed. If you’re egressing a lot to other sites or have elevated IOPS requirements, then cloud may still lose.



