
One of the biggest quirks I've bumped up against with TimescaleDB is that it's backed by a relational database.

We are a company that ingests around 90M datapoints per minute across an engineering org of around 4,000 developers. How do we scale a timeseries solution that requires an upfront schema to be defined? If a developer wants to add a new dimension to their metrics, does that require us to perform an online table migration? And does using JSONB as a field type offer all the desirable properties that a first-class column does?
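
For concreteness, here is a minimal sketch of the JSONB route in question (all table and column names are hypothetical, assuming a TimescaleDB hypertable):

    -- One typed time column plus a JSONB column for open-ended
    -- dimensions; names here are illustrative only.
    CREATE TABLE metrics (
        time  TIMESTAMPTZ NOT NULL,
        name  TEXT        NOT NULL,
        value DOUBLE PRECISION,
        tags  JSONB       -- new dimensions land here without DDL
    );
    SELECT create_hypertable('metrics', 'time');

    -- A GIN index makes tag filters indexable, though JSONB keys
    -- remain untyped and carry weaker planner statistics than a
    -- first-class column.
    CREATE INDEX metrics_tags_idx ON metrics USING GIN (tags);

    -- Querying on a dimension that was never declared upfront:
    SELECT time, value
    FROM metrics
    WHERE name = 'http.requests'
      AND tags @> '{"region": "us-east-1"}';

That tradeoff is the crux of the question: JSONB buys schema flexibility, but not the typing or planner statistics of a real column.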




90M datapoints per minute is 90M/60 = 1.5M datapoints per second. That volume can be handled comfortably by specialized time series databases, even in a single-node setup [1].

> If a developer wants to add a new dimension to their metrics, does that require us to perform an online table migration?

Specialized time series databases usually don't require defining any schema upfront: you just ingest metrics with new dimensions (labels) whenever you wish. I'm unsure whether this works with TimescaleDB.
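
For contrast with the relational side, a sketch of what each path costs in plain PostgreSQL (reusing the hypothetical metrics table from the comment above):

    -- Promoting a dimension to a first-class column requires DDL.
    -- In modern PostgreSQL this ALTER is metadata-only (no table
    -- rewrite), but it is still a coordinated schema change:
    ALTER TABLE metrics ADD COLUMN region TEXT;

    -- With the JSONB tags column, a brand-new label needs no DDL;
    -- writers simply start including it:
    INSERT INTO metrics (time, name, value, tags)
    VALUES (now(), 'http.requests', 1,
            '{"region": "us-east-1", "canary": "true"}');

Label-based systems make that second behavior the default for every dimension, which is why no upfront schema is needed.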

[1] https://victoriametrics.github.io/CaseStudies.html



