Time series data doesn't necessarily have to be 'huge' either; the point is a much greater level of historical precision. Example:
ISP sells a circuit with 95th percentile billing to a customer.
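That billing model is trivial to compute once every sample is retained. A minimal sketch of the standard method (sort every per-interval sample for the billing period, discard the top 5%, bill the highest remaining value); the sample values here are made up:

```python
import math

def pct95(samples_mbps):
    """95th-percentile billing: sort all per-interval samples for the
    billing period, throw away the top 5%, bill the highest remaining."""
    s = sorted(samples_mbps)
    k = math.ceil(0.95 * len(s))  # 1-based rank of the billed sample
    return s[k - 1]

# 100 one-minute samples: steady ~50 Mbps with a five-minute burst to 900.
samples = [50.0] * 95 + [900.0] * 5
print(pct95(samples))  # → 50.0 (the burst falls inside the discarded 5%)
```

This is also why the customer-facing dispute ("why was I billed X?") needs the raw samples: the billed figure depends on every interval in the month, not on an average.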
If you poll SNMP data from a router interface on 60-second intervals and store it in an RRD file, you will lose a great deal of precision over time, because the round-robin archives (RRAs) aggressively downsample older data. You'll have no ability to go back and answer a query like "We want to see traffic stats for the DDoS this customer took at 9am on February 26th of last year."
An implementation such as OpenTSDB that grabs the traffic stats for a particular SNMP OID and stores them will let you keep all traffic data forever and retrieve it as needed later on. The amount of data written per 60-second interval is minuscule; a server with a few hundred GB of SSD storage is sufficient to hold all traffic stats for the relevant interfaces on the core/agg routers of a fairly large ISP for several years. Once the raw time series is there, you can feed it into tools such as Grafana for visualization.
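A minimal sketch of the ingestion side against OpenTSDB's HTTP `/api/put` endpoint. The hostname, router/interface tags, and counter value are hypothetical placeholders, and real SNMP polling would use a library such as pysnmp rather than the hard-coded sample shown here:

```python
import json
import urllib.request

def build_point(metric, timestamp, value, tags):
    """Shape one data point the way OpenTSDB's /api/put expects."""
    return {"metric": metric, "timestamp": timestamp,
            "value": value, "tags": tags}

def push_points(points, host="tsdb.example.net", port=4242):
    """POST a batch of points to OpenTSDB (hypothetical host/port)."""
    req = urllib.request.Request(
        f"http://{host}:{port}/api/put",
        data=json.dumps(points).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

# One 60-second sample of the 64-bit input-octets counter for
# ifIndex 12 on a core router (counter value is made up).
point = build_point(
    metric="ifHCInOctets",
    timestamp=1700000000,
    value=987654321,
    tags={"router": "core1", "ifIndex": "12"},
)
# push_points([point])  # uncomment against a real TSDB endpoint
```

Because each point carries tags rather than living in a fixed per-file schema, the same metric can later be sliced per router, per interface, or per customer at query time.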