Show HN: Run globally distributed full-stack apps on high-performance MicroVMs (koyeb.com)
98 points by edouardb on Aug 17, 2023 | 30 comments
Hi HN! We’re Yann, Edouard, and Bastien from Koyeb (https://www.koyeb.com/). We’re building a platform to let you deploy full-stack apps on high-performance hardware around the world, with zero configuration. We provide a “global serverless feeling”, without the hassle of re-writing all your apps or managing k8s complexity [1].

We previously built Scaleway, a cloud service provider where we designed ARM servers and offered them as cloud servers. During our time there, we saw customers struggle with the same issues while trying to deploy full-stack applications and APIs resiliently. As it turns out, deploying applications and managing networking across a multi-data-center fleet of machines (virtual or physical) requires an overwhelming amount of orchestration and configuration. At the time, that complexity meant multi-region deployments were simply out of reach for most businesses.

When thinking about how we wanted to solve those problems, we tried several solutions. We briefly explored offering a FaaS experience [2], but from our first steps, user feedback made us reconsider whether it was the correct abstraction. In most cases, it seemed that functions simply added complexity and required learning how to engineer using provider-specific primitives. In many ways, developing with functions felt like abandoning all of the benefits of frameworks.

Another popular option these days is to go with Kubernetes. From an engineering perspective, Kubernetes is extremely powerful, but it also involves massive amounts of overhead. Building software, managing networking, and deploying across regions involves integrating many different components and maintaining them over time. It can be tough to justify the level of effort and investment it takes to keep it all running rather than work on building out your product.

We believe you should be able to write your apps and run them without modification with simple scaling, global distribution transparently managed by the provider, and no infrastructure or orchestration management.

Koyeb is a cloud platform where you come with a git repository or a Docker image, we build the code into a container (when needed), run the container inside of Firecracker microVMs, and deploy it to multiple regions on top of bare metal servers. There is an edge network in front to accelerate delivery and a global networking layer for inter-service communication (service mesh/discovery) [3].

We took a few steps to get the Koyeb platform to where it is today: we built our own serverless engine [4], using Nomad and Firecracker for orchestration and Kuma for the networking layer. In the last year, we launched six regions (Washington, DC; San Francisco; Singapore; Paris; Frankfurt; and Tokyo) and added support for native workers, gRPC, HTTP/2 [5], WebSockets, and custom health checks. Next, we are working on autoscaling, databases, and preview environments.
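As a concrete example of one of those features, here is a minimal sketch of a service exposing a custom health check endpoint. The probed path ("/health") and port (8000) are illustrative assumptions, not Koyeb-specific values; the idea is just that the platform polls an HTTP path you configure and treats a 200 as healthy.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            # Report 200 only when the service considers itself ready.
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

def serve(port: int = 8000) -> None:
    """Run the service; the platform's health checker would probe /health."""
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()

# serve()  # uncomment to run locally
```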

We’re super excited to show you Koyeb today and we’d love to hear your thoughts on the platform and what we are building in the comments. To make getting started easy, we provide $5.50 in free credits every month so you can run up to two services for free.

P.S. A payment method is required to access the platform to prevent abuse (we had hard months last year dealing with that). If you’d like to try the platform without adding a card, reach out at support@koyeb.com or @gokoyeb on Twitter.

[1] https://www.koyeb.com/blog/the-true-cost-of-kubernetes-peopl...

[2] https://www.koyeb.com/blog/the-koyeb-serverless-engine-docke...

[3] https://www.koyeb.com/blog/building-a-multi-region-service-m...

[4] https://www.koyeb.com/blog/the-koyeb-serverless-engine-from-...

[5] https://www.koyeb.com/blog/enabling-grpc-and-http2-support-a...




This is super interesting, but I have some questions.

I’ve explored running @Edge for performance gains and overall improved user experiences, but always have struggled with services like this (or fly.io or even just running my own VMs) and their data center locations.

Looking at your integrations page, for example, you call out PlanetScale, so I'll use that as an example to illustrate my challenge. Koyeb has a region in SFO. PlanetScale's closest region is in Oregon. For a database connection, that's a lot of latency, which likely undermines the performance gains of running a service at edge (at least in my use cases).

I've evaluated just rolling my own database replication for edge and it's not a huge deal, but finding information about data center providers to pair data with compute can often be challenging.

So the questions I'd like to pose: Are you able to provide speed-test endpoints or data center information for colocating other resources near each deployed Koyeb region? Do you plan to offer lower-level access to compute to address this kind of use case? Is there another implementation angle to this I am missing?


Agree, data location is indeed a central challenge when building globally distributed apps.

We picked the largest peering points in Europe and the US for the first two locations: Washington, DC (US-East) and Frankfurt. For the following four locations, which we announced last week in early access [1], we tried to pick the next best-interconnected locations on the world map: SFO / the valley, Singapore, Paris, and Tokyo.

We definitely need to do a better job in the docs [2]; we can provide a mapping matrix and will be working on latency measurements / speedtest / iperf servers.

In that direction, did you look at PolyScale [3]? They do database caching at the edge.

What do you have in mind regarding lower-level access to compute? We're looking at providing block storage and direct TCP/IP support if that's what you have in mind.

[1] https://community.koyeb.com/t/changelog-25-san-francisco-sin...

[2] https://www.koyeb.com/docs/reference/regions

[3] https://www.koyeb.com/docs/integrations/databases/polyscale


If you are comparing edge-computing providers, you should also check out EdgeNode (https://edgenode.com/)

Disclaimer: I am building EdgeNode with my friend.


This looks quite interesting, congrats on the launch!

Reminiscent of fly.io. Is it a direct competitor, or is there a major twist to it?

How do you handle apps composed of multiple services, if there isn't a configuration?

The pricing is a bit confusing, by the way: the free tier says "16GB of RAM & 16 vCPU per service" while in reality it seems you only get the 512MB RAM instance.


Thanks! Hope you’ll like it :)

We have similarities with fly.io (Firecracker MicroVMs on top of bare metal) and also some key differences:

- we integrate directly with GitHub to automatically build your application on push. We support building native code with Buildpacks or from a Dockerfile, in addition to pre-built containers.

- we put a CDN in front of all your services to provide caching and edge TLS termination

- technically, our internal network is a service mesh built with Kuma and Envoy

- overall, we aim to be a bit higher in the stack, instead of looking at providing low-level virtual machines, we want to focus on productivity features like preview environments

We actually meant zero infrastructure configuration. At this stage, there is some basic setup to do for a multi-service app: you need to configure the HTTP routes. We aim to add as much automatic discovery of the codebase as possible.

Thanks for the feedback on the pricing. $0 is actually the price of the plan, and we provide $5.50 of free credit with it. It seems the "Up to" was somehow dropped from "16GB & 16 vCPU per service", which is indeed confusing.


Do you have any plans to get into managed databases and other storage / state management solutions? I feel like 90% of what I want from a cloud provider is managed Postgres (with strong durability and availability guarantees) and some way to run compute (I don't even really care what or how that works for the most part). Everything else is a nice-to-have.


We are currently working on our managed Postgres offering. Should be available in technical preview in September. Other services like providing object storage are planned too but I don't have any ETA to share for now :)


Awesome! Object storage is also useful but less crucial because integrating with external services for object storage is much less painful :)


Who do you see as your ideal customer here?

For a startup or side projects, I doubt I have a need for edge computing, and using DigitalOcean's app services should be good enough. Obviously you would be a contender here, but not clearly ahead of DO, because the need for edge computing is not apparent for most things, whereas DO sell their own compute and are more established.

On the other hand established companies probably want more control over the infra that a big cloud would give you. You have compliance and DR to think about as well as security questionnaires to fill in.

I am guessing there is a gap in the middle? Something like a one year old startup?

I do like the pricing model. It gives me confidence. You are clearly subsidising the free tier at $5.50/m, and there should be no "shocks" like you get on other PaaS, where the free tier is ridiculously generous in some dimensions but then the costs become untenable in other dimensions. With your model, the free tier only differs in that there is a free allowance, not in how things are measured. Which is good!


My understanding is that Fly.io also started using Nomad but ended up running into big reliability issues at scale across many regions. I'm curious if you all are using it differently or haven't gotten to that scale yet.



I’d say we don’t use it exactly the same way: we don’t have a single global nomad cluster, which is a critical difference.

We have one Nomad cluster per region, which we "federated" ourselves using our own orchestrator. This reduces latencies between agents and each cluster, shrinks the failure domains, and avoids encoding all the constraints in a single Nomad job definition.

I'm not so worried about scaling with our setup, but the performance of the autoscaler might become a concern in the future.
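The per-region federation described above can be sketched roughly like this. This is illustrative only, not Koyeb's actual orchestrator: the region names, endpoint URLs, and helper function are all hypothetical.

```python
# Hypothetical region -> per-region scheduler cluster endpoint map.
# One cluster per region keeps agent-to-server latency low and the
# failure domain small; the orchestrator fans deployments out instead
# of encoding every placement constraint in one global job definition.
REGION_CLUSTERS = {
    "was": "https://nomad.was.internal:4646",
    "fra": "https://nomad.fra.internal:4646",
    "sin": "https://nomad.sin.internal:4646",
}

def plan_deployment(service: str, regions: list[str]) -> list[tuple[str, str]]:
    """Return one (cluster_endpoint, job_name) pair per requested region."""
    unknown = [r for r in regions if r not in REGION_CLUSTERS]
    if unknown:
        raise ValueError(f"no cluster for regions: {unknown}")
    # Each region's cluster only ever sees its own job for the service.
    return [(REGION_CLUSTERS[r], f"{service}-{r}") for r in regions]
```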


Congratulations on the launch! It seems very intriguing! I have some open questions:

- Where is the company based and what is the jurisdiction? Probably you forgot to add the imprint :o)

- Is there a difference between edge and non-edge locations?

- Can data storage be tied to a location?

- Is it tied to github or can it be used with self-hosted gitlab?

- Is there a rough ETA for databases, especially postgres(-like)?

Thanks in advance!


Thanks for all the questions!

We're headquartered in Europe; you'll find the legal details in our terms :) https://www.koyeb.com/docs/legal/terms

There is a difference between edge and non-edge locations (we call them core): edge locations terminate the TLS connection, do caching, and route traffic to the nearest core location. We explained how this works in this post [1] and this talk [2]. The TLDR is: if that core location is set up to run an instance of your service, traffic is sent to the right machine in the location; otherwise, it's routed to a core location where an instance is running.
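The routing decision described here can be sketched as a toy function. The edge/core names and latency numbers below are made up for illustration; the real system presumably uses live measurements rather than a static table.

```python
# Hypothetical edge -> core round-trip latencies in milliseconds.
LATENCY_MS = {
    ("edge-nyc", "was"): 6,
    ("edge-nyc", "sfo"): 62,
    ("edge-nyc", "fra"): 85,
}

def pick_core(edge: str, cores_running: set[str]) -> str:
    """After TLS termination at the edge, forward to the lowest-latency
    core location that actually runs an instance of the service."""
    candidates = [
        (ms, core)
        for (e, core), ms in LATENCY_MS.items()
        if e == edge and core in cores_running
    ]
    if not candidates:
        raise LookupError("no running instance reachable from this edge")
    return min(candidates)[1]
```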

Data storage can be tied to a location, as you're deciding where your application runs: if you ask us to run an application in Frankfurt, Germany, we're not going to move it to the US or to any other location.

The build engine is tied to GitHub, but you can deploy a pre-built Docker container. GitLab support has been highly requested [3], so it's definitely on the list of things we're considering implementing.

Databases should land on the platform in September in early access [4], we're actively working on it.

[1] https://www.koyeb.com/blog/building-a-multi-region-service-m...

[2] https://www.youtube.com/watch?v=IB93WCoroL8

[3] https://feedback.koyeb.com/feature-requests/p/git-driven-dep...

[4] https://feedback.koyeb.com/feature-requests/p/managed-postgr...


Thanks, it sounds great!

> The build engine is tied to GitHub but you can deploy a pre-built Docker container.

That's fine!


Really needs another tier in between. Hobbyists are looking for the $5-20 range, not a jump from free to 79 bucks.


The starter plan is actually a pay-per-use plan, so you start at $0 and spend depending on your usage. You don't need to move directly to $79. Does that make sense?

It seems we need to do some work on the pricing page.


I see. Thanks


Just tested it with my Quickwit engine :)

Easy setup and fast deployment, pretty happy so far.

I have one question though: I did not see an object storage feature. Is it possible to combine Koyeb with some kind of object storage service, maybe an external one? What's your recommendation on this?


Glad to hear that! We plan to provide object storage in the future. For now, our recommendation is to rely on a specialized provider for that, e.g. S3, GCS, R2, Wasabi, etc.
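Since these providers all speak the S3 API, wiring a Koyeb service to an external store mostly means pointing an S3-compatible SDK at the right endpoint. A hedged sketch, assuming credentials come from environment variables (`STORAGE_ACCESS_KEY` / `STORAGE_SECRET_KEY` are made-up names); double-check the endpoint patterns against each provider's own docs:

```python
import os

# Endpoint URL patterns for some S3-compatible providers (verify against
# each provider's documentation before relying on them).
S3_COMPATIBLE_ENDPOINTS = {
    "aws": "https://s3.{region}.amazonaws.com",
    "wasabi": "https://s3.{region}.wasabisys.com",
    "r2": "https://{account_id}.r2.cloudflarestorage.com",
}

def storage_config(provider: str, **kwargs) -> dict:
    """Build a client config dict usable with an S3-compatible SDK
    (e.g. passed as keyword arguments to boto3's client constructor)."""
    endpoint = S3_COMPATIBLE_ENDPOINTS[provider].format(**kwargs)
    return {
        "endpoint_url": endpoint,
        "aws_access_key_id": os.environ.get("STORAGE_ACCESS_KEY", ""),
        "aws_secret_access_key": os.environ.get("STORAGE_SECRET_KEY", ""),
    }
```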


I would need very good bandwidth between the instances and the object storage. On AWS, we manage to get 1 GB/s for one instance. Is there one particular service that would perform particularly well? Also, I don't really want to pay for data transfer...


How did you get so many great company logos onto your website? Did each one have to go through legal approval? That was a struggle for us at my old startup.


Are the 16 vCPUs dedicated to the VMs of your service, or are they shared with potentially hundreds or thousands of others on the host?


For now, the vCPUs are shared for all types of MicroVMs, with a constant ratio per GB of RAM.

We're planning to release instance types with dedicated CPUs for applications that need them.
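The "constant ratio for each GB of RAM" works out to something like the sketch below. The ratio value of 0.5 vCPU per GB is an assumption for illustration, not Koyeb's actual number:

```python
def shared_vcpus(ram_gb: float, vcpu_per_gb: float = 0.5) -> float:
    """vCPU share an instance gets under a fixed CPU:RAM ratio.

    The 0.5 default is a made-up example ratio; the point is only that
    CPU scales linearly with the RAM you pick, rather than being
    configurable independently.
    """
    if ram_gb <= 0:
        raise ValueError("RAM must be positive")
    return ram_gb * vcpu_per_gb
```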


If I wanted to create a MicroVM with 4GB RAM and 2 vCPUs, or 4GB RAM and 8 vCPUs, would that work? Or is it currently only possible to create VMs at the fixed ratio?

Also wanted to note that I came to this question by reading the website's explanation of the benefits of MicroVMs; to me it seems to heavily favor the operator over the customer by touting the oversubscription capabilities. That feels like something that should not be presented front and center to prospective customers.

With the lack of nested virtualization and of compatibility with QEMU cloud images, I also wonder how much benefit there really is to MicroVMs. The fast boot time feels like the only one, but how much faster is it than QEMU once you add the API overhead? Offering both hypervisors and letting the customer choose could be interesting.

Congrats on the launch and thanks for sharing!


They're really running out of .com domains aren't they?


Is this like render.com?


There are similarities; one key difference is that we're running on top of high-performance bare metal servers with high-end CPUs.


I’m so excited for your launch, I cannot wait to get in and play with it.


Great to hear it! Let us know how it goes :)



