
CloudFlare has an approach that may scale to the levels you are looking for; rather than storing the logs, they analyze and roll up the expected responses in real time, and store additional detail only for items that appear anomalous. John Graham-Cumming gave a talk on this topic earlier this month at dotScale:

http://www.thedotpost.com/2015/06/john-graham-cumming-i-got-...

Here is the related HN thread: https://news.ycombinator.com/item?id=9778986
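A toy illustration of that pattern, assuming a simple rolling mean/stddev as the "expected response" model (the window size and z-score threshold here are arbitrary illustrations, not CloudFlare's actual method):

    import collections, math

    # Roll up responses in real time; keep full detail only for
    # samples that look anomalous under a toy z-score test.
    class RollupStream:
        def __init__(self, window=1000, threshold=3.0):
            self.window = collections.deque(maxlen=window)
            self.threshold = threshold
            self.rollup = {"count": 0, "total": 0.0}  # cheap aggregate only
            self.anomalies = []                       # full detail retained

        def observe(self, sample):
            if len(self.window) >= 2:
                mean = sum(self.window) / len(self.window)
                var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
                std = math.sqrt(var) or 1.0
                if abs(sample - mean) / std > self.threshold:
                    self.anomalies.append(sample)     # store extra detail
            self.window.append(sample)
            self.rollup["count"] += 1
            self.rollup["total"] += sample

The point is that storage cost is dominated by the aggregate, which is constant-size, while anomalies are rare enough that keeping them in full is cheap.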

-G




If you have no working model of what is and is not correct, how do you determine which responses are anomalous?

To expand:

A variant of this method is already used in data-logging compression, where one stores channel deltas and timestamps, reconstructing the actual values ad hoc when necessary. This is a good way to compress non-volatile datasets.
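A minimal sketch of that delta/timestamp scheme, assuming numeric channel samples (the function names are illustrative):

    # Delta-encode a series of (timestamp, value) samples.
    # Only the first sample is stored in full; each later sample
    # stores the difference from its predecessor.
    def delta_encode(samples):
        encoded = []
        prev_t, prev_v = None, None
        for t, v in samples:
            if prev_t is None:
                encoded.append((t, v))            # first sample verbatim
            else:
                encoded.append((t - prev_t, v - prev_v))
            prev_t, prev_v = t, v
        return encoded

    # Reconstruct the original samples ad hoc when needed.
    def delta_decode(encoded):
        t, v = encoded[0]
        samples = [(t, v)]
        for dt, dv in encoded[1:]:
            t, v = t + dt, v + dv
            samples.append((t, v))
        return samples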

I've actually watched the talk already, and while it seems to apply, it doesn't. Every data point is important, because the real problem is comparing different tests, with time between tests, to get an idea of how hardware ages. Or to test component swapping, where a known test is performed on several different items and the results are compared in post-processing. To use the suggested method, your storage solution requires knowledge of what's being stored.

:.:.:

The goal is to unify these storage solutions and present a unified front end for querying and report generation.
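A rough sketch of what such a unified front end might look like, assuming each storage backend can be wrapped in a common query interface (all names here are hypothetical, not an existing API):

    from abc import ABC, abstractmethod

    # Hypothetical adapter interface: each storage backend
    # implements the same query method, so report generation
    # only ever talks to one front end.
    class LogBackend(ABC):
        @abstractmethod
        def query(self, test_id, start, end):
            """Return raw (timestamp, value) samples for a test run."""

    class UnifiedFrontEnd:
        def __init__(self, backends):
            self.backends = backends  # list of LogBackend instances

        def query(self, test_id, start, end):
            # Merge results from every backend into one stream,
            # sorted by timestamp, so reports can compare test
            # runs regardless of where they were stored.
            results = []
            for backend in self.backends:
                results.extend(backend.query(test_id, start, end))
            return sorted(results)

Because every raw data point is preserved behind the adapter, this keeps the full-fidelity comparisons described above possible, at the cost of forgoing the rollup-style savings.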



