One of the biggest quirks I've bumped up against with TimescaleDB is that it's backed by a relational database.
We are a company that ingests around 90M datapoints per minute across an engineering org of around 4,000 developers. How do we scale a timeseries solution that requires an upfront schema to be defined? What if a developer wants to add a new dimension to their metrics? Would that require us to perform an online table migration? And does using JSONB as a field type offer all the desirable properties that a first-class column would?
> We are a company that ingests around 90M datapoints per minute across an engineering org of around 4,000 developers.

90M datapoints per minute works out to 90M/60 = 1.5M datapoints per second. Volumes like that can be handled comfortably by specialized time series databases, even in a single-node setup [1].
> What if a developer wants to add a new dimension to their metrics? Would that require us to perform an online table migration?
Specialized time series databases usually don't require any schema to be defined upfront: just ingest metrics with new dimensions (labels) whenever you wish. I'm not sure whether this works with TimescaleDB.
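To make that concrete, here's a minimal sketch of schemaless ingestion, assuming a time series database that accepts the Influx line protocol over HTTP (VictoriaMetrics, for example, exposes a /write endpoint for this). The URL, metric name, and labels below are made up for illustration:

```python
# Minimal sketch: pushing datapoints to a schemaless TSDB via the Influx
# line protocol over HTTP. The endpoint, metric name, and labels are
# illustrative assumptions, not a specific production setup.
import time
import urllib.request

WRITE_URL = "http://localhost:8428/write"  # assumed ingestion endpoint

def push(measurement: str, labels: dict, value: float) -> None:
    # Influx line protocol: measurement,tag1=v1,tag2=v2 field=value timestamp(ns)
    tags = ",".join(f"{k}={v}" for k, v in labels.items())
    line = f"{measurement},{tags} value={value} {time.time_ns()}"
    req = urllib.request.Request(WRITE_URL, data=line.encode(), method="POST")
    urllib.request.urlopen(req)

# Day 1: a metric with two dimensions.
push("http_requests_total", {"service": "checkout", "region": "us-east-1"}, 1.0)

# Day 2: a developer adds a new dimension. No ALTER TABLE, no migration --
# the new label is just sent along with the datapoint.
push("http_requests_total",
     {"service": "checkout", "region": "us-east-1", "canary": "true"}, 1.0)
```

With TimescaleDB, the equivalent change would touch the table schema (or land in a JSONB column), which is exactly the trade-off the parent comment is asking about.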