Why is your storage price 10x that of BigQuery? How does your compute price compare to Bigtable?

Edit: bigtable->bigquery




I don't bill separately for compute vs. storage the way Bigtable does. The price per GB of data is inclusive of compute. The goal is for pricing to work like DynamoDB: you just pay for what you use. The other dimension I charge on is query wall time, so a 30 s query will cost you more than a 500 ms one.

I haven't used Bigtable, but it seems like the minimum charge is on the order of $300 before you store any data. With ScratchDB, the minimum charge is $10 for 30 GB.

Additionally, data compresses to about 25% of its original size on average. So if your 1 TB of raw data only takes up 250 GB on disk, you only pay for those 250 GB.
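To make that concrete, here's a rough sketch of what a monthly bill might look like under the numbers in this thread. The per-GB rate is just what "$10 for 30 GB" implies, and the per-hour wall-time rate and the query mix are made up for illustration, not official prices:

    # Rough bill estimate; assumptions are noted inline.
    RAW_GB = 1000               # 1 TB of raw data
    COMPRESSION_RATIO = 0.25    # compressed size is ~25% of raw (average cited above)
    PRICE_PER_GB = 10 / 30      # ~$0.33/GB, implied by "$10 for 30 GB"
    PRICE_PER_QUERY_HOUR = 1.0  # hypothetical wall-time rate, not from this thread

    stored_gb = RAW_GB * COMPRESSION_RATIO      # 250 GB actually billed
    query_hours = 2000 * (0.5 / 3600)           # e.g. 2,000 queries at 500 ms each

    storage_cost = stored_gb * PRICE_PER_GB
    compute_cost = query_hours * PRICE_PER_QUERY_HOUR

    print(f"storage: {stored_gb:.0f} GB -> ${storage_cost:.2f}")
    print(f"compute: {query_hours:.3f} h -> ${compute_cost:.2f}")
    print(f"total:   ${storage_cost + compute_cost:.2f}")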

Bigtable isn't OLAP, so you wouldn't use the two for the same data. ScratchDB competes more directly with GCP's BigQuery.

Finally, I'm interested in pricing feedback! The goal is to be able to sustain the development of this, so I want to do what makes sense.


Sorry, I wrote that wrong; I meant BigQuery. Your storage price is about 9x BigQuery's.


I haven't calculated the break-even point for BigQuery vs. ScratchDB. It would be impossible to do in advance, since I'd need to know how much data BigQuery would scan to make an estimate. Also, what is a "slot-hour"?

That is why I priced in units of GB and hours for storage and compute - those are things you can observe more easily.

It is a good question, though, and perhaps I can run an experiment and write a blog post using example data showing the differences. I might be surprised at how efficient BigQuery is!
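If I do run that experiment, the comparison would look something like this back-of-the-envelope sketch. The BigQuery numbers here are rough approximations of its public on-demand list prices (worth re-checking against GCP's pricing page), the ScratchDB rate is just what "$10 for 30 GB" implies, and the wall-time charge is ignored:

    # Back-of-the-envelope break-even sketch, not a benchmark.
    def bigquery_monthly(stored_gb, tb_scanned):
        # ~$0.02/GB-month storage + ~$5/TB scanned (rough list prices; verify)
        return stored_gb * 0.02 + tb_scanned * 5.0

    def scratchdb_monthly(stored_gb):
        # $10 minimum, then ~$0.33/GB implied above; ignores the wall-time charge
        return max(10.0, stored_gb * (10 / 30))

    for tb_scanned in (0.1, 1, 10, 100):
        print(f"{tb_scanned:>5} TB scanned/mo: "
              f"BigQuery ~${bigquery_monthly(250, tb_scanned):.2f}, "
              f"ScratchDB ~${scratchdb_monthly(250):.2f}")

The point of the sketch is just that a scanned-bytes model gets cheaper when you query rarely and more expensive when you query heavily, while a flat per-GB-stored price stays the same either way.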



