So in Heka, set up an input (a LogstreamerInput, for example) and attach that haproxy decoder script to it (as a SandboxDecoder). After that, pass the messages through any filters you'd like (such as StatFilter) and ship the collected stats out with the HTTP output plus an InfluxDB encoder.
(I just built a log-parsing and stats-collecting pipeline for our Heroku apps with haproxy + Heka + InfluxDB + Grafana. So far I'm happy with the result.)
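For a rough idea of how those pieces wire together, here is a sketch of a Heka TOML config. The section names, the decoder script path, and the output address are all placeholders for illustration; check the exact option names for each plugin against the Heka docs before using it:

```toml
# Input: tail the haproxy log and hand each line to the decoder.
[haproxy_logs]
type = "LogstreamerInput"
log_directory = "/var/log"
file_match = 'haproxy\.log'
decoder = "haproxy_decoder"

# Decoder: the haproxy Lua script, run as a SandboxDecoder.
[haproxy_decoder]
type = "SandboxDecoder"
filename = "lua_decoders/haproxy.lua"

# Filter: aggregate decoded messages into stats.
[haproxy_stats]
type = "StatFilter"
message_matcher = "Type == 'haproxy'"

# Output: POST the collected stats to InfluxDB's HTTP API.
[influx_out]
type = "HttpOutput"
address = "http://localhost:8086/write?db=metrics"
encoder = "influx_encoder"
```

The nice part of this design is that each stage is swappable: you can point the same input at a different decoder, or add more filters, without touching the rest of the pipeline.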
I used Graphite and now I'm using InfluxDB, and on the logging side, Kibana + Logstash + ES.
With statsd and InfluxDB you can record all the events in your platform in one database; it's pretty easy, and statsd client libraries exist for many languages. I measure all the events in my products: response timings, database queries, logins, sign-ups, calls, all go to statsd.
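To show how little is involved, here is a minimal sketch of what a statsd client does under the hood (a hypothetical toy, not any particular library; in practice you'd use an existing statsd package). Metrics are just small plain-text datagrams sent over UDP:

```python
import socket

class TinyStatsd:
    """Toy statsd client: formats metrics per the statsd wire protocol
    and fires them at the daemon over UDP, fire-and-forget."""

    def __init__(self, host="localhost", port=8125):
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def _payload(self, name, value, kind):
        # e.g. "signups:1|c" (counter) or "response_time:320|ms" (timer)
        return f"{name}:{value}|{kind}"

    def incr(self, name, count=1):
        self._send(self._payload(name, count, "c"))

    def timing(self, name, ms):
        self._send(self._payload(name, ms, "ms"))

    def _send(self, data):
        # UDP never blocks waiting for a server, so instrumenting your
        # code stays cheap even if the statsd daemon is down.
        self.sock.sendto(data.encode("ascii"), self.addr)

client = TinyStatsd()
client.incr("signups")               # count an event
client.timing("response_time", 320)  # record a timing in milliseconds
```

Because sends are fire-and-forget UDP, you can sprinkle these calls through request handlers and background jobs without worrying about adding latency.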
Logs are good for debugging, but if you want to measure every event in your platform, statsd + InfluxDB + Grafana are your best friends, and your managers will be happy with that ;-)
A few weeks ago I gave a talk about this; you can see the slides here, plus a few examples and a Docker deployment:
http://acalustra.com/statsd-talk-at-python-vigo-meetup.html
Regards ;-)