Hacker News
GraphCDN is now Stellate and raised $30M (stellate.co)
58 points by mxstbr on June 6, 2022 | 35 comments



Definitely a good name change. "GraphCDN" locked the company into basically one product (albeit an interesting one), while there are so many closely related ones that, it seems, their customers have been asking about as well.

(For a minute, though, I feared they'd pivoted into blockchain and crypto - somehow my mind made that connection from the new name :shrug:)


Good blockchain caching use cases though: https://stellate.co/blog/scaling-hasura-at-rmrk & https://stellate.co/blog/super-fast-hasura-graphql-for-stxnf...

"Given that the Bitcoin blockchain only produces new blocks once about every 10 minutes, Gamma’s data also changes at most every 10 minutes — making it an ideal use case for caching.

Thomas added Gamma’s GraphQL edge cache in front of Gamma’s Hasura GraphQL API, and immediately got an overall cache hit rate of 87%, which corresponds directly to a decrease in traffic to their Hasura instance.

Even better, their two most highly requested queries are sitting at a 97% and 92% cache hit rate, respectively."


GraphQL is just HTTP; why isn't a caching proxy like Varnish enough?

The blockchain use case is poor, to say the least.


Our GraphQL Edge Cache is based on Fastly's infrastructure, so we do use Varnish under the hood for the cache storage. In order to properly support GraphQL at the caching layer you have to understand GraphQL so you can do fine-grained invalidation.[0]

Essentially, our GraphQL Edge Cache is similar to a GraphQL client. It looks at the request body with the query & the response with the typenames and tags the cached query result with all the objects contained within. E.g. a getBlogPost query that fetches the blog post and its author will be tagged with BlogPost#asdf123 and User#gjkd489. Then we can invalidate whenever those specific objects change.

[0]: https://docs.stellate.co/docs/purging-api
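The tagging scheme described above can be sketched roughly as follows. This is an illustrative toy, not Stellate's actual code; all class and function names are hypothetical:

```python
# Hypothetical sketch of object-tagged caching: walk a GraphQL response,
# collect a "Typename#id" tag for every object it contains, and purge any
# cached query whose tag set includes an object that changed.

def collect_tags(node):
    """Recursively gather 'Typename#id' tags from a GraphQL response tree."""
    tags = set()
    if isinstance(node, dict):
        if "__typename" in node and "id" in node:
            tags.add(f'{node["__typename"]}#{node["id"]}')
        for value in node.values():
            tags |= collect_tags(value)
    elif isinstance(node, list):
        for item in node:
            tags |= collect_tags(item)
    return tags

class TaggedCache:
    def __init__(self):
        self.results = {}  # query key -> cached response
        self.tags = {}     # query key -> set of object tags

    def store(self, query_key, response):
        self.results[query_key] = response
        self.tags[query_key] = collect_tags(response.get("data", {}))

    def invalidate(self, tag):
        """Purge every cached query result containing the given object."""
        stale = [k for k, t in self.tags.items() if tag in t]
        for k in stale:
            self.results.pop(k, None)
            self.tags.pop(k, None)
        return stale
```

So a cached getBlogPost result tagged with BlogPost#asdf123 and User#gjkd489 gets purged when either of those objects changes, without touching unrelated cached queries.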


how many businesses are having the problem you’re solving? is this number greater than 10?

aren’t you scared Fastly and Cloudflare will just clone your moat once they see some demand for a service like yours?


Akamai already solved this https://developer.akamai.com/blog/2019/06/18/announcing-grap...

But it looks like they will build more out-of-the-box integrations.


The number is actually greater than 1,000 right now. No, we're not scared of Fastly or Cloudflare and are actually cooperating with them. What makes you think that "they will just clone the moat"?


The question of how many people have the problem isn't the same as asking how many people you've managed to sign up. Can you be clearer about what your number is referring to?

I’ve signed up for many more services than I actually need/use, and even amongst ones I use, there are certainly some solving problems I don’t really have.


Trying to make a global data graph seems to be an appealing problem to try and solve, but anecdotally seems to just result in an eventual acquihire rather than a successful product (even if they do have cool tech).

From the press release and supporting document, I’m not really understanding what this is all about, and why one should be excited about this pivot. What does the world look like with a successful version of this?

If I’m building a product that consumes multiple third-party APIs at various points in its lifecycle, I’m not seeing a compelling value in talking to one huge borg graph rather than surgically accessing specific APIs, other than maybe only having to manage one API key.

Can someone paint a cleaner picture for me of why this matters?


I work on a GraphQL-based e-commerce framework and several of my users have had good experiences integrating GraphCDN/Stellate into their projects.

Recently I started working with it too, and so far the product has been great to work with. Moreover, their team is very responsive. I had a technical question regarding my integration and was able to schedule a call for the next day with one of their team, and to my surprise he'd prepared a script to solve the exact issue I was asking about (thanks Jovi)!

Congrats to the team!


how did your users benefit from GraphCDN? what problem did GraphCDN solve for them? have you considered other options?


Simple - the load on their origin servers is cut down significantly. And GraphCDN made it easy and painless to set up. I don't know of any other options that can do schema-aware caching in this way that is as user-friendly as Stellate.

As an aside - I notice you commenting cynically on almost every thread here. A dose of skepticism is fine, and of course we should consider alternatives, technical limitations, etc. But your tone and activity come across more as a personal vendetta against this company. Very strange.


Congrats on the announcement, and name change.

Considering the shift over the last decade to microservice APIs (including GraphQL here), we're now seeing lots of companies moving to become the gateway/orchestration layer. So it makes sense that you're moving into this space. You've already got caching, analytics, monitoring, etc. Now these services exist for all subgraphs!

I wish you guys the best of luck! As said in another comment, stellar team to make it happen.

PS. What would be interesting to see is some kind of "control plane" movement with Stellate. Give me a binary I can run that you generate based on all of my config.


My previous comment was a bit harsh. I hadn't had coffee; I apologize.

I still think that $30M is a bit too much for a problem that is already solved.

We will see what they are able to do with the funding.

Good luck to the team


This dismissive take is the biggest bull case for me.

GraphCDN is a killer product with an impressive business and a sharp team. Excited to see what's next for them.


what makes GraphCDN a “killer product”?

it’s just a proprietary caching proxy

“impressive business” is entirely subjective


the n+1 graphql problem is more difficult than most people realize. graphcdn makes it incredibly easy to globally scale a backend application with dynamically changing queries. of course, it _is_ possible to hack together some clever cloudflare workers (i've tried), but it gets increasingly difficult to actually have the cache hit if you have a ton of changing fields (with nested values). then, there's the sunk cost of managing/tweaking everything if you're a small team. I also really like how clean and useful the visualizations are.

it's not for every use case, but it's clearly useful for the thousands of companies that have signed up and are paying them. :)
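The difficulty the parent comment mentions - getting cache hits when queries have lots of changing fields - largely comes down to cache-key normalization. A minimal sketch of the DIY Workers-style approach (all names hypothetical, not how Stellate or Cloudflare actually do it):

```python
# Hypothetical sketch: building a deterministic cache key for a GraphQL POST.
# Whitespace in the query and key order in the variables must be normalized,
# or near-identical requests produce different keys and miss the cache.
import hashlib
import json

def cache_key(query: str, variables: dict) -> str:
    # Collapse insignificant whitespace so formatting differences don't yield
    # distinct keys (a real implementation would normalize the parsed AST).
    normalized_query = " ".join(query.split())
    # Serialize variables with sorted keys for a stable byte representation.
    normalized_vars = json.dumps(variables, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(f"{normalized_query}|{normalized_vars}".encode()).hexdigest()
```

Even this only covers the easy part; handling field aliases, fragments, and per-object invalidation is where the hand-rolled approach gets painful.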


"it's just"

This is seriously in every founder's history/bio :) Dropbox, iPhone, etc.

More seriously, the play into the "global data graph" is potentially quite appealing.


what is a “global data graph” and why do i need it?

Dropbox and iPhone are consumer products, so the comparison is not even close


The example they give is basically Zapier for data?

"For example, you might want to figure out if the customer who just submitted a ticket is a high-value customer by checking the matching subscription in Stripe's data and whether there are any associated high-value deals with that customer's company in Salesforce."

No guarantees on traction, but folks doing integration work have gotten a fair bit of use out of this. So, as always - this might fail.


A Worker can't cache or rate-limit itself; it runs before the cache [1], so every invocation of the URL counts as a credit (the free tier allows 1,000 runs per minute / 100,000 per day / 3,000,000 per month).

Stellate's always-free tier can front your free GraphQL Worker with 100,000,000 extra CDN hits - a possible saving of $50/month if you're on paid Workers alone. What's the catch? Attribution?

[1]: https://community.cloudflare.com/t/how-to-cache-a-page-serve...
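The $50/month figure above checks out as back-of-the-envelope arithmetic, assuming the Workers paid tier bills roughly $0.50 per million requests beyond the included quota (the rate at the time; current pricing may differ):

```python
# Back-of-the-envelope check of the savings figure in the comment above,
# assuming ~$0.50 per million paid Worker requests (an assumption, not a
# quoted price from the thread).
hits_absorbed = 100_000_000    # CDN hits absorbed before reaching the Worker
price_per_million = 0.50       # assumed paid Workers rate, USD
savings = hits_absorbed / 1_000_000 * price_per_million
print(savings)  # 50.0
```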


[removed]


Could you please stop posting like this? It's fine to make your point once but this sort of unsubstantive harangue doesn't add anything interesting, and it's nasty.

https://news.ycombinator.com/newsguidelines.html


64*0 = 0, so I think you mean "from 1 user to 64". No idea what the actual numbers are though.


Looks like the funding market isn't completely dead!

Good for them, the product is very interesting. I'm going to keep an eye on this; it's a very interesting space.


Stellar team! Congrats Tim and Max!


@Max I loved watching your livestream of building Feedback Fish! Congrats on this product; I basically hand-rolled something like this using Cloudflare Workers. It would have been amazing to have this out of the box. Good luck for the future!


I kinda thought Apollo would buy GraphCDN. Both great products!


The media keeps spinning that the world is about to end and it's impossible to raise funds. And yet here we are: $30M.


Worth noting that a lot of fundraising announcements are made a while after the fundraising actually closes. Companies often time the announcement to also include other significant news/product updates, in this case a name change. It's very possible the round closed a few months back.


Great point. So we should continue to observe the VC and overall market. Personally, I'm not that pessimistic about the current economy.


$30M to solve a problem no sane project can possibly have

unless your product manager is trying to impersonate whatever FAANG he wants to work at


Seems like a good product, but also seems easy to replicate if it gets serious traction.


The "global data graph" is probably going to have network effects if they succeed. This is what makes otherwise less than great experiences (facebook et al) so sticky.

So the question of the technical ability to replicate a service is not always the critical factor.


I meant the cache specifically




