Hey HN — Segment PM on the project here! Happy to answer questions.
Under the hood we're using NSQ as a queuing layer, S3 for storage and batched uploads, Amazon Aurora (for S3 indexing), DynamoDB for billing and metadata storage, and several distinct Go services that handle batching, transformation, schema updating, deduplication and internal consistency checking.
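In case it helps to picture how the batching piece fits together, here's a rough Go sketch (not our actual service code; the topic, bucket name, and thresholds are all made up): it consumes events off NSQ, buffers them, and flushes a batch to S3 once the buffer is big or old enough.

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "sync"
        "time"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/s3"
        nsq "github.com/nsqio/go-nsq"
    )

    const (
        maxBatchSize = 10000            // flush after this many events...
        maxBatchAge  = 30 * time.Second // ...or once the batch is this old
        bucket       = "example-events" // hypothetical bucket
    )

    type batcher struct {
        mu    sync.Mutex
        buf   bytes.Buffer
        count int
        since time.Time
        s3    *s3.S3
    }

    // add buffers one event and flushes when the batch is big or old enough.
    func (b *batcher) add(msg []byte) {
        b.mu.Lock()
        defer b.mu.Unlock()
        if b.count == 0 {
            b.since = time.Now()
        }
        b.buf.Write(msg)
        b.buf.WriteByte('\n')
        b.count++
        if b.count >= maxBatchSize || time.Since(b.since) >= maxBatchAge {
            b.flush()
        }
    }

    // flush uploads the batch as a single time-keyed S3 object. This is also
    // the point where the new object gets recorded in the Aurora index.
    func (b *batcher) flush() {
        key := fmt.Sprintf("batches/%d.json", time.Now().UnixNano())
        _, err := b.s3.PutObject(&s3.PutObjectInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(key),
            Body:   bytes.NewReader(b.buf.Bytes()),
        })
        if err != nil {
            log.Printf("put %s: %v", key, err)
            return
        }
        b.buf.Reset()
        b.count = 0
    }

    func main() {
        b := &batcher{s3: s3.New(session.Must(session.NewSession()))}

        consumer, err := nsq.NewConsumer("events", "warehouse-loader", nsq.NewConfig())
        if err != nil {
            log.Fatal(err)
        }
        consumer.AddHandler(nsq.HandlerFunc(func(m *nsq.Message) error {
            b.add(m.Body)
            return nil
        }))
        if err := consumer.ConnectToNSQLookupd("127.0.0.1:4161"); err != nil {
            log.Fatal(err)
        }
        <-consumer.StopChan
    }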
It's been in beta for several months and we're loading about 10,000 events per second into customers' databases today.
If you scroll down the posted page, you can see the details on loading latency: it ranges from daily loading on the free plans down to 30 minutes on the business tier.
What about companies that have additional data sources? For example, a CRM. It seems like it's hardly a data warehouse if it's _only_ event data. Or am I missing something?
Sure! We batch the data we load into warehouses by time and a few other properties. Usually, to figure out what objects are in S3, you have to issue an S3 ListObjects request, and that operation tends to be relatively slow, especially when there are many objects.
Instead, when we put a new object, we update a table in Aurora which tracks all of the relevant objects. That way, we can query information like "what objects were uploaded in a certain time range" very quickly.
We have a worker consuming S3 events and updating the index. On a related note, we have experimented with Lambda to do something similar; AWS has done a fantastic job integrating their products :)
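To make that concrete, here's a simplified sketch of the two sides of the index (Go with database/sql; the table and column names are illustrative, not our real schema): the worker inserts a row per uploaded object, and a load can then ask for a time range with an indexed query instead of listing the bucket.

    package main

    import (
        "database/sql"
        "log"
        "time"

        _ "github.com/go-sql-driver/mysql" // Aurora is MySQL-compatible
    )

    // recordObject runs after a successful S3 PutObject: it adds the new
    // object to the index so we never have to list the bucket to find it.
    func recordObject(db *sql.DB, bucket, key string, uploadedAt time.Time) error {
        _, err := db.Exec(
            `INSERT INTO s3_objects (bucket, object_key, uploaded_at) VALUES (?, ?, ?)`,
            bucket, key, uploadedAt,
        )
        return err
    }

    // objectsBetween answers "what was uploaded in this window?" with an
    // indexed range scan on uploaded_at, which stays fast however many
    // objects the bucket holds.
    func objectsBetween(db *sql.DB, bucket string, from, to time.Time) ([]string, error) {
        rows, err := db.Query(
            `SELECT object_key FROM s3_objects
              WHERE bucket = ? AND uploaded_at BETWEEN ? AND ?`,
            bucket, from, to,
        )
        if err != nil {
            return nil, err
        }
        defer rows.Close()

        var keys []string
        for rows.Next() {
            var k string
            if err := rows.Scan(&k); err != nil {
                return nil, err
            }
            keys = append(keys, k)
        }
        return keys, rows.Err()
    }

    func main() {
        db, err := sql.Open("mysql", "user:pass@tcp(aurora-host:3306)/warehouse?parseTime=true")
        if err != nil {
            log.Fatal(err)
        }
        keys, err := objectsBetween(db, "example-events", time.Now().Add(-time.Hour), time.Now())
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("%d objects uploaded in the last hour", len(keys))
    }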
I think this is the direction analytics is heading, and we'll see more products like this one over the next few years. The analytics companies have realized they can't answer every question their customers ask, so they've started adding these kinds of features to their products: look at Mixpanel's custom applications, Amplitude's Redshift integration, or Keen.io's S3 integration.
The main reason these companies add these features to their infrastructure is to offer an alternative way to analyze data within their product, so they don't lose the existing customers who need more advanced analytics (those are usually the biggest paying customers). The funny thing is that once you have an analytical database combined with a stream processing application, you can ask almost any question you want and get answers quickly enough, so their core product becomes less valuable once that alternative exists.
I think BI tools such as Periscope and Mode Analytics realized this and started to promote their products as analytics products rather than as applications that create charts from your data.
[Shameless plug]
I'm also working on an open-source analytics platform (https://github.com/buremba/rakam) that collects data from clients (web, mobile, or a smartwatch, it doesn't matter), transforms it (IP-to-geolocation, referrer extraction, etc.), and stores it in a database you specify (currently there are two options: Postgres, or an in-house big data solution that uses PrestoDB as the query engine).
Then you can execute SQL queries, pre-aggregate your data for fast reports with continuous queries, and cache query results with materialized views. Once you have these features, you can run all the usual analytical queries such as funnels, retention, and segmentation, and build your own custom analytics service easily.
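To give a flavor of the pre-aggregation idea, here's a tiny sketch against plain Postgres (this isn't rakam's actual API; the table, columns, and view name are made up): raw events get rolled up into a materialized view once, and reports then read the small view instead of rescanning the raw data.

    package main

    import (
        "database/sql"
        "log"

        _ "github.com/lib/pq"
    )

    func main() {
        db, err := sql.Open("postgres", "postgres://localhost/analytics?sslmode=disable")
        if err != nil {
            log.Fatal(err)
        }

        // Pre-aggregate raw events into daily counts once...
        if _, err := db.Exec(`
            CREATE MATERIALIZED VIEW IF NOT EXISTS daily_pageviews AS
            SELECT date_trunc('day', collected_at) AS day,
                   count(*)                        AS views,
                   count(DISTINCT user_id)         AS users
              FROM pageview
             GROUP BY 1`); err != nil {
            log.Fatal(err)
        }

        // ...then reports hit the cached view instead of the raw event table.
        if _, err := db.Exec(`REFRESH MATERIALIZED VIEW daily_pageviews`); err != nil {
            log.Fatal(err)
        }
    }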
This is a smart move by Segment, since the industry has been moving in this direction. It looks like mParticle launched support for Redshift a few months ago:
It may well be the same designer; who knows. It's just pretty strange that so many of Stripe's design elements are 100% reflected on this company's page.
I definitely agree that Stripe's design is beautiful, but this is just too close for me. I think it would be awesome if they built on Stripe's beautiful interface and made something distinctly their own!
Hey there! You’re right, we launched Redshift to our enterprise customers last November. There are three big changes today…
1) We’re lowering the price and opening it up to all our customers to make it more accessible
2) You can bring your own database. This is helpful for customers who already have a data warehouse and want to load Segment data into it.
3) We now support Postgres, in addition to Redshift
No problem! They're not yet on our near-term roadmap. Would you mind submitting a request for your specific database here so we can keep you updated? https://segment.com/contact/integrations
It wasn't immediately clear to me that I had to bring my own database. That makes this product less appealing. I would love it if I could instrument Segment and get a Redshift endpoint to query my data, but didn't have to set it up myself.
I love how the pricing is clearly value-based and not cost-based. I can't imagine that this is massively more complex to run, but the price is significantly higher (and that's fine); it's basically more enterprise-y. I love it!
Wondering if/how that will impact their bigger integration plans, which include a feature to replay data for new integrations you add after the fact.