Hacker News
Firecracker – Lightweight Virtualization for Serverless Computing (amazon.com)
368 points by leef on Nov 27, 2018 | 110 comments



What this allows, and I'm hoping a full-fledged service will be announced on Thursday or Friday, is running containers as Lambdas. I.e., if your application starts fast enough, you can just set a container to start and run as a request comes in. It can also shut down when it's done running.

This allows things like per second billing for container runs, serverless containers (there's no container running 24/7, only when there's traffic), etc.


Why run a container? What value does that abstraction provide here?

To my mind this completely negates any value proposition of the container. The only thing missing, at face value, is something as straightforward as the Dockerfile for building base images. I imagine that shouldn't be hard using things like guestfish and the rest of the libguestfs tools.


I want to run Ruby on Rails on Lambda, with no changes from the servers I run on my laptop. Or maybe I want to run Crystal. Or maybe I’m writing my own language. Doesn’t really matter.

Lambda works great as a deployment and execution model. This allows anything to run on Lambda, not just specially prepared runtimes.


It's a VM, right? So whatever you put in the base image will be available, just like with a container. It's just fast to provision and small.


In this case the base image is then a packaging format, which takes us back to building special AMIs for this. I used to do EC2 deployments like this in the early days, using Packer.io.

I prefer containers because you build once and run anywhere, as opposed to building for each deployment target.


So essentially pairing Firecracker with Lambda will reduce Lambda down to a simple trigger that can spin up one of these VMs, whereas before Lambda ran the code?


Yes, and the fact that each Lambda executor is a VM allows the execution of any code in any language using any server.


Your container = your environment.

When your container is not running (say, 99% of time), other customers' containers are running. No need to ever boot the kernel, etc.

One might say that a unikernel has advantages over it. But it also has a higher barrier to entry.


It's a KVM microinstance. Your image = your environment. Same as with a container.


I'm expecting/hoping for this as well. GCP already has something like this in alpha: https://services.google.com/fb/forms/serverlesscontainers/ More info on it here in the "Serverless containers" section: https://cloud.google.com/blog/products/gcp/cloud-functions-s...


The FAQ on its website says it can't run Docker and others yet. I hope this is coming soon too!


> containers as Lambdas.

How similar is AWS Fargate to what you're describing?


With Fargate, I need to run 1+ containers 24/7, which is useless and wasteful.

With a Fargate-Lambda crossover I wouldn't be running anything 24/7, and it would be a lot less resource-intensive than one Lambda container per request as well.

Google's App Engine got this right when it first launched, but to make it work they had to demand apps be written for their sandbox (like AWS Lambda), which is why the model isn't as general-purpose. Firecracker would allow regular containers to be used this way, making a Firecracker service the first to allow general-purpose servers to be started and stopped (all the way to zero) based on incoming traffic.


I think with the App Engine Standard generation 2 runtimes you don't have to write to their sandbox anymore. It still has to be one of their supported languages though, instead of any arbitrary server.

https://cloud.google.com/blog/products/gcp/introducing-app-e...


Serverless Container support is coming to GCP: https://services.google.com/fb/forms/serverlesscontainers/


Very easy to exec() into an arbitrary binary from these 2nd generation runtimes.

(and, as thesandlord notes below, arbitrary containers is the idea behind g.co/serverlesscontainers)

Disclaimer: I work on GCP.


Another missing piece, in addition to the billing aspect mentioned elsewhere, is all the existing event-based integrations Lambda provides, e.g. reacting to Kinesis, SQS, SNS, etc. events. Having AWS manage the event plumbing in addition to start/pause/stop is really nice.


The pricing model is not serverless. The basic serverless principle is no-use-no-pay.


Theoretically, if container start-up times are around 125ms, it should be possible to achieve this with Fargate + Knative's "scale to zero" functionality[1]. AWS is already working on improving Fargate/Kubernetes integration.

[1] https://cloud.google.com/knative/


Doesn't a Kubernetes master on Fargate cost a significantly non-zero amount of money? Pretty sure it does.


Sure, but if you already have one then there is no incremental cost.

It would be great if they announced that they were gonna remove EKS master costs altogether. Technically Firecracker should make it possible for them to run that infrastructure more efficiently :)


Only dev instances should see zero traffic; any production site will have > 0 load at all times.


That's not completely accurate.

- If we're talking about a business that provides a service to local companies, there are quite a few hours during the week where everyone is either asleep or enjoying their weekend. Not every company has millions of users spread across every time zone; some companies provide a niche service to a small number of high paying users.

- Lots of developers have small hobby projects that are inactive for most of the day/week.

Scale to zero can be convenient, but it's not usually a make-or-break thing.


Those folks would be ideally suited for DigitalOcean, Vultr, etc. All of the function-as-a-service providers have a cost model that only makes sense for < (some really low qps). A $10 DO instance can handle how many $$ worth of FuncaaS?


I mean, the first 1 million Lambda invocations and 400,000 GB-seconds of compute are free each month... you can do a lot of things within the free tier alone. That free allocation represents about $7 worth of Lambda resources. Assuming that it costs the same to write your application to run in Lambda as it would to write it as a standalone app, which it almost certainly doesn't due to (probably) immature frameworks, that would be really nice for certain businesses. I agree that it is a niche, and savings of a few dollars a month aren't generally valuable to viable businesses.

What I was getting at is that the "scale to zero" feature with Knative is rather worthless if you spend more money just running the Kubernetes master on EKS alone than you would spend running a $5 or $10 per month DigitalOcean instance.

Lambda scales all the way to zero, and it's free when it's at zero. You just pay for what you use. Actually, you pay for what you use minus $7, since every account always gets the free tier for Lambda.


I was using the DO instance as a standard unit, so on DO $10 gives you (1vCPU 2GB)

    5M GB seconds (2 * 86e3 * 28)
    2400M function invocations @1k qps, (86e3 * 1e3 * 28 / 1e6)

with that, the DO query load represents $17k

Does the math check out?

I know which model allows me to "scale to zero" faster. If I hand you $10, you will give me a nice used Honda?


The math does not check out.

Using your numbers and current Lambda prices:

5M GB-seconds * $0.00001667/GB-sec = $83.35

(5,000,000 * 0.00001667 = 83.35)

2400M function invocations * $0.2/million = $480

(2400 * 0.2 = 480)

total = $480 + $83.35 = $563.35/month, which is nowhere close to $17k/mo. I have no idea how you got that number.
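For what it's worth, here's a quick Python sketch that re-checks the arithmetic using the same numbers and the list prices quoted above:

    gb_seconds = 5_000_000            # 2 GB * 86,400 s/day * ~28 days
    invocations_millions = 2_400      # ~1k qps for a month, in millions of calls

    compute = gb_seconds * 0.00001667        # $83.35 in GB-second charges
    requests = invocations_millions * 0.20   # $480.00 in per-request charges
    print(compute + requests)                # ~$563.35/month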

I also find it highly unlikely that most businesses are servicing 2.4B requests per month on their API. In my opinion, you either have to be an absolutely enormous monster of a business or have a really unusual business model to be at that level of utilization and not be huge.

Whatever your business, you're unlikely to be spending $10/mo in resources to service nearly 30 billion requests per year. You're probably going to need more than $10/mo just to store the access logs for your API, let alone useful customer data!

In reality, a lot of entire businesses would be much lower in utilization than that. The downsides of a single DigitalOcean droplet are many: a single point of failure, and you'll never achieve high availability if you apply updates regularly and reboot, unless you have multiple $10/mo droplets and a load balancer. You'll also have high latency to customers outside of your region of the world, unless you run a HA-cluster of multiple droplets in each regional datacenter that you care about. Call it three droplets per region and we'll say you want to run them in five regions, so that's $150/mo just in droplets alone, plus a $10/mo load balancer per region, so $200/mo. Did I mention that you or your engineers are going to be responsible for doing the maintenance and replication across regions? Surely the number of engineering hours devoted to this upkeep will be worth more than $350/mo. Huh, I guess we just justified Lambda's costs.

And again, I never claimed this was some kind of snake oil magical solution.

I don't know why you think I'm opposed to the DigitalOcean solution and some sort of salesman for Lambda. I use DigitalOcean heavily for my personal stuff, and I almost never use Lambda, even though I like the idea. You've created a strawman, and you're trying to expertly knock it down... except for the whole math thing as noted above.

If we really dig into this, neither the DigitalOcean solution nor the Lambda solution discussed accounts for the cost of running your stateful infrastructure, whether it's a traditional RDBMS, some NoSQL system, Kafka, or just a giant object store like S3.

My singular point in this entire thread was that scale-to-zero is worthless if the solution that enables scale-to-zero costs more than not scaling to zero. If DigitalOcean is the solution that costs less than scaling to zero, then obviously that is more valuable than the solution that scales to zero.


I used that $7 amount for 1M invocations. I trust your math; you're passionate, and it sounds like you have first-hand knowledge.

The economics are laid before us like a golden fleece.


I edited my comment a few times after I posted it. I hope that you are responding to the most recent version, but your response leaves some doubt in my mind that this is the case. I am certainly passionate about recognizing that there are no perfect tech solutions, including DigitalOcean and Lambda.


Yep, with Knative it is possible.


There's a Github Pages FAQ describing why it was made and how it fits with other solutions: https://firecracker-microvm.github.io/

and a high-level design document about how it works https://github.com/firecracker-microvm/firecracker/blob/mast...


Interesting name choice. When I clicked on the link and saw the name and design, my first thought was, "Is this a Firebase knockoff...?" [1] ... and then I scrolled to the bottom to see the copyright and saw this project is by Amazon Web Services.

[1] https://firebase.google.com


> Firecracker was built in a minimalist fashion. We started with crosvm and set up a minimal device model in order to reduce overhead and to enable secure multi-tenancy. Firecracker is written in Rust, a modern programming language that guarantees thread safety and prevents many types of buffer overrun errors that can lead to security vulnerabilities.

This is awesome! Really excited to try this out!


This is huge! It basically removes the VM as the security boundary for something like Fargate [1]. This should lead to a significant reduction in pricing, since Fargate will no longer need to over-provision in the background because VMs were being used even for tiny Fargate launch types.

It should hopefully eliminate the cost disparity between using Fargate vs. running your own instances. It should also mean much faster scale-out, since your containers don't need to wait on an entire VM to boot!

Will be interesting to see what kind of collaboration they get on the project. This is a big test of AWS stewardship of an open source project. It seems to be competing directly with Kata Containers [2] so it will be interesting to see which solution is deemed technically superior.

[1] https://aws.amazon.com/fargate/ [2] https://katacontainers.io/


Indeed, this seems very similar to kata+runv+kvmtool(lkvm). I'm curious why they don't provide a comparison. Here's what I gathered:

- it seems to boot faster (how?)

- it does not provide a pluggable container runtime (yet)

- a single tool/binary does both the VMM and the API server, in a single language.

Can anyone else chime in?


> I'm curious why they don't provide a comparison

They do, if you read the FAQs: https://firecracker-microvm.github.io/#faq


I did, and it does not answer my question, because they only address the runv+QEMU use case, not the runv+kvmtool one:

Kata Containers is an OCI-compliant container runtime that executes containers within QEMU based virtual machines


From memory, the original version of Intel Clear Containers had its own KVM-based VMM, but they moved back to QEMU (or a more minimal patched version they maintain). They are working on containerd support, so it should be similar to Kata soon.


That's what I thought, too, but re-reading the articles, they were using a patched kvmtool: https://lwn.net/Articles/644675/

So this is exactly what runv's lkvm backend is doing (except kvmtool isn't patched anymore). And Intel Clear Containers do not exist anymore (many broken links on Clear Linux's website remain, though), since they moved to Kata as well:

https://01.org/blogs/2017/kata-containers-next-evolution-of-...


It sounds like it’s already being used in Lambda and Fargate, though I’m not sure how long that’s been the case:

> Firecracker has been battle-tested and is already powering multiple high-volume AWS services including AWS Lambda and AWS Fargate


Clear Containers (now called Kata Containers) did this more than three years ago, with similar performance numbers (sub-200 ms boot times). It is frustrating, but not surprising, to see the same regurgitated solution receive this much excitement. The Firecracker documentation also does not mention the similarity with prior work; oh well.

[Not affiliated with Intel in any way---just a long-time proponent of the clear containers approach.]


The FAQs on the Firecracker website[1] specifically address the difference between Firecracker and Kata Containers. The main thrust is that they have decided not to use QEMU and have instead chosen a much more minimal "cloud-native" oriented approach that deliberately abandons certain features in order to gain greater security, efficiency, and agility going forward. They also decided to implement it in Rust.

Based on the responses I have seen from non-Amazon employees with experience in this space[2][3][4], it looks like their approach is solid.

It should also be noted that one of the main architects of Firecracker was formerly the project lead for QEMU.[5][6]

1. https://firecracker-microvm.github.io/#faq

2. https://twitter.com/bcantrill/status/1067326416121868288

3. https://twitter.com/jessfraz/status/1067286831287418881

4. https://twitter.com/kelseyhightower/status/10672947809488322...

5. https://twitter.com/jessfraz/status/1067282499938721792

6. https://twitter.com/anliguori/status/1067293131366785024


OK, I had missed the Kata Containers blurb in the FAQ; thanks for pointing it out. In fact, the tweets make my point: we are all so blinded by new shiny releases that we forget their highly incremental nature.


Sure, there are going to be some people who are excited by the fact that something seems new or just because it is written in Rust, but jessfraz and bcantrill certainly don't fall into those categories. They have a lot of experience with operating systems, VMs, and containerization, and I don't get the impression that they are easily impressed by shiny things. Note that they all work for or have worked for AWS competitors (Google/MS/Joyent).

I think what is impressive about Firecracker is that they have chosen to reuse a lot of the right things (Linux/KVM/Rust) while also taking a new approach and rethinking important assumptions (no BIOS, no pass-thru, no legacy support, minimal device support).

In my opinion the Firecracker FAQs give sufficient mention to parallel projects and tools they have built on like Kata Containers, QEMU, and crosvm. The developers certainly seem open to collaboration with those communities.

AWS doesn't have much of a track record in terms of leading open source projects, so some skepticism is understandable, but I think what we have seen so far is a very good start.


As a QEMU developer, this is very exciting. Even though there are some differences in the approach to the device model, they are not important in the grand scheme of things and in principle there is no reason why QEMU could not serve the same uses as Firecracker. It's just like Linux runs on anything from 16MB routers to supercomputers, and it means there is a lot that we can learn from Firecracker.

In fact we are considering integrating a more secure language than C in QEMU, even though we're just at the beginning and it could be C++ or Rust depending on whom you are talking to. :) It's possible that this announcement could tilt the balance in favor of Rust, and it would be great if QEMU and Firecracker could share some crates.


These days, I would expect bcantrill to be excited by something written in Rust :)


Hey now -- I'm not quite that easily impressed! ;) This is a problem domain that I have suffered in[1] -- and we have recently moved from KVM to bhyve[2] for several of the same reasons that motivated Firecracker. Not that it hurt that it was in Rust, of course... ;)

[1] https://www.youtube.com/watch?v=cwAfJywzk8o

[2] http://bhyvecon.org/bhyvecon2018-Gwydir.pdf


Ha! I wasn't trying to imply that it would only take Rust, for sure. :)

I am excited that everyone seems very excited.


After Amazon released its implementation the whole ecosystem profits, as it creates diversity and buzz around the topic. I think it's great to have (open source) alternatives, especially with the marketing weight of Amazon's solution entering the playing field. Also: is it clear that Kata was first? Three years doesn't sound like they've been miles ahead.

[Not affiliated with either side]


yep. happens all the time. people flock to brand association because it "must be good". halo effect or some other cognitive bias in action.


> microVMs, which provide enhanced security and workload isolation over traditional VMs, while enabling the speed and resource efficiency of containers.

Reminds me of rkt + kvm stage 1 https://github.com/rkt/rkt/blob/master/Documentation/running...

Too bad it didn't take off.


This looks great; I'm just wondering what Amazon's motivation for open-sourcing it is. It seems like some pretty critical secret sauce for making services like Lambda and Fargate both secure and efficient.


Google recently open-sourced gVisor, which, although implemented differently, solves a similar problem. Possibly Amazon wants to encourage other vendors to build integrations with Firecracker rather than gVisor.

https://cloud.google.com/blog/products/gcp/open-sourcing-gvi...


Also, Cloudflare announced Workers using isolates.


Pushing the adoption of "serverless" ultimately benefits Amazon, as it's the largest provider.


Right. In the end, AWS saw containerization as an existential threat, and serverless is its response to the commoditization of AWS (and other specialized cloud vendors) by containerization technology. Serverless helps AWS re-couple your application back into the specialized AWS vendor environment, once more requiring you to keep specialized and costly AWS-specific knowledge on-hand in order to build and deploy your application. It gives them plenty of room to advertise and provide all of those edge case services to you once more (and their usage charges) and also helps them prevent you from treating AWS like a rack of networked CPUs to serve as the substrate for your application containers.

People (mostly AWS folks -- dig a little deeper into who is writing many of the serverless blog posts out there) keep pushing the "serverless is containers" line, but that's just a tactical response. Add a layer of abstraction and it's very clear why AWS is betting so hard on serverless.

Originally, AWS commoditized the old datacenters by providing the same network/CPU substrate, but at a higher cost because you outsourced the management of those resources to AWS. And AWS slowly dripped out new and convenient services for your application to consume, allowing you to outsource even more of your application needs to this one vendor. And while the services offered by AWS were just a little bit different, they were functionally similar. And that's how you locked yourself into using AWS instead of CoreColoNETBiz or whatever datacenter you were using before. I remember one of the first major outages of us-east-1, which caught most of the internet with its pants down (interestingly, the answer was just to give more money to AWS for multi-region redundancy). AWS had a pretty good thing going: outsourcing the management of all those resources to AWS is expensive!

But that's when containers came along and people at AWS started to take notice. With containers, people could de-couple their applications from Dynamo and Elastic Beanstalk and VPC and all those specialized services that cost so much time/money. Instead, you could just cram all that shit into containers, without needing to set up IAM roles or pore over Dynamo documentation or dump so much time into getting VPC set up just right. And that's the whole point of containerization: easily build your services in a homogeneous environment with exactly the software you want to use, and eliminate that technical debt of vendor lock-in and the enormous cost center of specialized vendor knowledge (e.g. Dynamo, IAM, VPC, etc.). Treat the cloud -- any cloud -- like a bunch of agnostic resources. Docker commoditized the commoditizers.

And serverless is how AWS plans to get you to re-couple your application tightly to their specialized web of knowledge and services. They get to say that you're still using containers, but they need to gloss over the fact that you're locked into the AWS version of containers. You cannot "export" your specialized AWS-only knowledge of Fargate or Lambda or API Gateway or ECS to Google Cloud or Azure or some dirt cheap OVH bare metal. You're tightly re-coupled to AWS, having bought into their "de-commoditization" strategy. Which I need to stress is totally fine, if you're okay with that. It just needs to be made clear what you are trading off.


Having worked at AWS, I have to disagree with you.

No one that I worked with saw containerization as a threat. And why would they? At the VM level you can already paper over differences between cloud providers, and I don't think that anyone at any of the large cloud providers lies awake at night worried about this.

I also don't understand why serverless would couple you to a particular cloud provider. All the big cloud providers provide serverless features and it never takes long to see feature parity.

What ties you to a cloud provider (or any company) is when you use features unique to that provider. And presumably you're using those features because the value they add outweighs the perceived costs of lock-in or the cost of implementing it yourself.


The point is that each serverless implementation is different enough that even if you are using the same feature, the cruft around it is different enough to provide a certain amount of lock-in.


Sure, and this is to be expected. It costs time and money to align yourself with someone else's implementation, and unless your customers demand alignment (e.g. S3-compatible storage interfaces), you're probably not going to bother.

Again, this comes down to cost-benefit calculations. If some companies find that proprietary feature X from cloud Y provides a bigger (perceived) return on investment than not using feature X, then they are likely to use it. If company X later shafts them, they have to swallow more costs to migrate away but hopefully (for them) they took this possibility into consideration when they made their original decision.


I mean, the arguments and return values are different. Seems like a couple hours of work to convert from one to another, at most.
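For a plain HTTP endpoint, the shape difference really is small. A hypothetical sketch in Python (assuming an API Gateway-style Lambda proxy event on one side, and the Flask-style request object that Google Cloud Functions hands you on the other; names are made up for illustration):

    import json

    def lambda_handler(event, context):
        # AWS shape: dict event in, dict with statusCode/body out.
        name = (event.get("queryStringParameters") or {}).get("name", "world")
        return {"statusCode": 200, "body": json.dumps({"hello": name})}

    def gcf_handler(request):
        # GCF shape: Flask-style request in, (body, status) tuple out.
        name = request.args.get("name", "world")
        return (json.dumps({"hello": name}), 200)

The business logic stays put; only the thin handler shim changes.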


S3, or data storage in general, is the biggest coupling. Why do you think containerization threatens AWS? It doesn't. Vendor lock-in happens naturally; IAM is a vendor lock-in feature, and I didn't see many people even mention it. Migrating off that stuff is incredibly hard and things can easily go wrong.


My big question is: is this something only exciting for people doing Lambda at massive scale?

QEMU is exciting technology and has paved the way for all kinds of interesting layers. So creating a slimmed-down improvement that really makes it faster and provides a new Lambda-ish execution context is great.

I'm sure Amazon cares about that. I'm sure people doing millions of Lambda calls a day care about that.

But, if I'm an entrepreneur thinking about building something entirely new, is there something I'm missing about this that would make me want to consider it?

Lambda and Firebase Functions are exciting partially because they break services into easy to deploy chunks. And, perhaps more importantly, easy things to reason about.

But that's not the big deal: the integration with storage, events, and everything else in AWS (or Firebase) is what really makes it shine. It's all about the integration.

When I read this documentation, I'm left wondering whether I want to write something that uses the REST API to manage thousands of microVMs. That seems like extra work that Amazon should do, not me.

Am I missing something important here? Surely Amazon will integrate this solution somewhat soon and connect it to all the fun pieces of AWS, but the fact that they didn't consider or mention it makes me think it is something I should not consider now.
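To be concrete about what that per-VM management looks like: the getting-started docs describe the API as a handful of JSON PUTs over a Unix domain socket (configure a kernel, a root drive, the machine size, then start). A rough, stdlib-only Python sketch, assuming Firecracker was started with --api-sock /tmp/firecracker.socket and using placeholder kernel/rootfs paths:

    import http.client
    import json
    import socket

    class FirecrackerConn(http.client.HTTPConnection):
        """HTTP over the Unix domain socket exposed by the Firecracker process."""

        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    def put(path, body):
        conn = FirecrackerConn("/tmp/firecracker.socket")  # from --api-sock
        conn.request("PUT", path, json.dumps(body),
                     {"Content-Type": "application/json"})
        print(path, conn.getresponse().status)
        conn.close()

    # Guest kernel and root filesystem (placeholder paths).
    put("/boot-source", {"kernel_image_path": "vmlinux",
                         "boot_args": "console=ttyS0 reboot=k panic=1 pci=off"})
    put("/drives/rootfs", {"drive_id": "rootfs", "path_on_host": "rootfs.ext4",
                           "is_root_device": True, "is_read_only": False})
    put("/machine-config", {"vcpu_count": 1, "mem_size_mib": 128,
                            "ht_enabled": False})

    # Boot the microVM.
    put("/actions", {"action_type": "InstanceStart"})

The surface is small, but orchestrating thousands of these is exactly the plumbing one would hope a managed service takes care of.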


I really hope this helps with the cold start times on Lambda. We are currently looking heavily into moving our API from Lambda to EKS, but if this impacts cold start times, I think we will wait and see how it ends up looking in practice.


Most cold start time problems on Lambda I've seen are VPC-related - public network Lambdas start in milliseconds, with the main lag being the userspace code startup time.

Starting a Lambda inside a VPC involves attaching a high-security network adapter individually to each running process, which is likely what takes so long. I assume AWS is working on that, though; they've claimed some speedups unofficially.

If your security model allows, try running your Lambdas off-VPC.


The VPC startup times are insane, so we quickly moved our Lambdas out of that, accepting the trade-off.

Our normal cold starts are in the 1-2 second range, and the app initialization comes after. That's too high for a user-facing API :/


We got around this with a bit of a hack - use a CloudWatch event to trigger a dummy invocation of your function every five minutes. This keeps the container "hot" and reduces the start time (and is negligible cost-wise). This won't fix the cold starts when the function scales up, but it does reduce latency for 99% of our API requests.
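The handler side of this is tiny: just recognize the scheduled ping and bail out before doing any real work. A rough sketch in Python (assuming the dummy trigger is a standard scheduled CloudWatch event, which arrives with "source": "aws.events"):

    import json

    def handler(event, context):
        # Warm-up ping from the scheduled CloudWatch rule: skip the real work.
        if event.get("source") == "aws.events":
            return {"warmed": True}

        # ...normal request handling...
        return {"statusCode": 200, "body": json.dumps({"ok": True})}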


We're already doing this, but unfortunately it doesn't work well when you are expecting lots of parallel calls at various times.

The obvious solution would be to just merge microservices into the same Lambda, but at that point we'd rather switch to EKS or something, and actually be able to utilize a microservice architecture fully.

To give a little more context, we have a bunch of microservices exposing GraphQL endpoints/schemas. We then have a Gateway which stitches these together and exposes a public GraphQL schema. Because of the flexibility (by design) of GraphQL, we can easily end up invoking multiple parallel calls to several microservices when the schema gets transformed in the Gateway.

This works really well and gives a lot of flexibility in designing our APIs, especially utilizing microservices to the full extent. It also works really well when the Lambdas are already warm, but when we then get one cold start amongst them all, suddenly we go from responses in milliseconds to responses in seconds, which I don't think is acceptable.

We've been shaving off things here and there, but we are more or less at the mercy of cold starts. So our current plan is to migrate to an EKS setup; we just need to get a fully automated deployment story going, to replace our current CI/CD setup, which heavily uses the Serverless framework.


Isn't using a server better in this case? Or does Lambda have some benefits in this setup?


It's still insanely cheap. You could have millions of executions per month and only pay $0.50. But if you needed to, it could scale up to billions of invocations nearly instantly, something a standard server would have trouble doing as easily as Lambda does.


Then again, you are doing the hacks you describe because it is not scaling up nearly instantly. The cold start delays are not only an issue when scaling from zero to one; they hit you whenever you scale the capacity up.


This. I continue to hear about hacks of people running pings to keep a single instance warm. But that doesn't cover periodic changes in capacity needs or spikes. I would think that to avoid cold starts altogether you'd need a pinger that sent exactly the load difference between peak load and current load. I would love to hear if anyone is keeping Lambdas warm at more than n=1 capacity.


> if anyone is keeping Lambdas warm at more than n=1 capacity

There are various ways to do it, but I feel that it's a very suboptimal solution, and it still won't guarantee that no cold starts happen.

I've personally come to the conclusion that Lambda is very nice for anything non-latency-sensitive. We are still using it to great effect for e.g. processing incoming IoT data samples, where volume can vary quite a lot, but it only happens in the backend, and nobody will care if it's 1-2 seconds delayed.
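For what it's worth, one of those ways is fanning out concurrent synchronous pings from a scheduled warmer so that a single container can't absorb them all. A rough sketch in Python/boto3 (the function name, the count, and the assumption that the target sleeps briefly on warm-up events are all made up for illustration):

    import json
    from concurrent.futures import ThreadPoolExecutor

    import boto3

    lambda_client = boto3.client("lambda")
    TARGET = "my-api-function"   # hypothetical target function
    N = 5                        # number of containers to keep warm

    def warm_one(i):
        # Synchronous invoke so each call occupies a container for its duration.
        lambda_client.invoke(
            FunctionName=TARGET,
            InvocationType="RequestResponse",
            Payload=json.dumps({"warmup": True, "slot": i}),
        )

    def handler(event, context):
        # Fan out N concurrent pings; the target should pause ~100ms on
        # warm-up events so the calls spread across N containers.
        with ThreadPoolExecutor(max_workers=N) as pool:
            list(pool.map(warm_one, range(N)))
        return {"warmed": N}

Even then, it only smooths out the steady state; a real traffic spike beyond N still hits cold starts.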


That's odd. Are you running the interpreter runtimes (NodeJS/PHP), compiled binaries (Go) or VMs like Java?


We are using Node.js. From what I’ve seen online, Python should be the fastest though.

Edit: I wanted to add that, from what I’ve gathered from people testing online, bundle size didn’t really matter, but perhaps someone else has some information that points to the contrary?


Bundle size is rarely equivalent to the size of the initial executable code, i.e. I can have a giant photo of my dog in the bundle but it won't necessarily affect the startup time of node index.js. I think there is some kind of effect in that the bundle must be downloaded to the Lambda container server from S3, but that seems pretty fast, and it's likely cached there for a while.


One solution is invoking a scheduled Lambda (with a test payload) at regular intervals to keep the function warm.


The crosvm and Rust parts have me intrigued. I've been hoping for something like this since I saw the first hints of Rust showing up in ChromeOS via crosvm.

A compare/contrast with Kata Containers would also be interesting. Their architectures look similar. (Kata Containers [1] being another solution for running containers in KVM-isolated VMs, that has working integrations with Kubernetes and containerd already. Not affiliated, but I'm tinkering with it in a current project, though I'm also now keen to get `firecracker` working as well.)

Obviously, if nothing else, QEMU vs. crosvm is a big difference, and probably a significant one, since my understanding is that Google also chose to eschew QEMU for Google Cloud.

[1]: https://katacontainers.io/


Kata Containers is a lot of infrastructure for running containers and it uses QEMU to run the actual VMs. Firecracker just replaces the QEMU part and we're eager to work with folks like the Kata community.

I love QEMU, it's an amazing project, but it does a ton and it's very oriented towards running disk images and full operating systems. We wanted to explore something really focused on serverless. So far, I'm really happy with the results and I hope others find it interesting too.


We felt the same way about QEMU before we started crosvm. Glad to see you all found some use out of it.


The devops training site katacoda.com will be interesting to watch. They spin up and tear down _so_ many VMs; their cloud bill must be monstrous. Firecracker is much leaner, so they would save a lot of cycles by spinning up Firecracker over Kata.


Katacoda has nothing to do with Kata Containers...

I'm not sure how you can draw any of those conclusions anyway, unless you know a lot of seemingly private details about how Katacoda is implemented.


It’s QEMU without all the legacy stuff, and they also open-sourced it. Interesting.


QEMU can do much more than this.


Which is exactly the problem.


That’s why the attack surface is way larger and harder to keep an eye on.



@zackbloom, @kentonv hint hint. Isn't this roughly the same memory footprint as a Worker? CONTAINERS ON ALL THE CLOUDFLARE THINGS!


Heh. Truthfully, what I'm most excited about right now is being able to start a worker in less time than it takes to make an internet request. When you can do that you get magical autoscaling, and it becomes just as cheap to run it in hundreds of places as in one. As long as you have to invest ~100ms of CPU to get one of these VMs running, I'm not sure it will have quite the same economics.


Yeah, jokes aside, I simply don’t think it makes sense to run full processes on the edge. Not yet, anyway.

Script isolates make a lot of sense with current hardware limitations, but full processes at the edge are coming sooner or later.


That would make me a little sad. I'm not excited about the idea that we figured out the ideal way for a program to be encapsulated in 1965 and it will never change.


You still have a full Linux kernel running inside the VM with Firecracker, though, versus essentially a fiber with Cloudflare.


If you can implement it by tomorrow afternoon before the Andy Jassy keynote you might be able to steal some thunder.


I’m very excited to play with this technology, in the same way I love playing with Elixir/Erlang and userland concurrency models. I also love the idea of Docker (and use it daily) but dislike the ergonomics. My first thought is, particularly with the emphasis on oversubscription, how does scheduling on the host kernel work?


This still seems much slower than the model used by Cloudflare for what they call "Workers."[1] A recent blog post a few weeks back was the subject of considerable discussion here[2], and it seems to me to be doing much the same thing as Firecracker, but faster because there's less overhead. But maybe I'm missing something.

[1] https://blog.cloudflare.com/cloud-computing-without-containe...

[2] https://news.ycombinator.com/item?id=18415708


> But maybe I'm missing something.

From the "Disadvantages" section of your first link:

"No technology is magical, every transition comes with disadvantages. An Isolate-based system can’t run arbitrary compiled code. Process-level isolation allows your Lambda to spin up any binary it might need. In an Isolate universe you have to either write your code in Javascript (we use a lot of TypeScript), or a language which targets WebAssembly like Go or Rust."

"If you can’t recompile your processes, you can’t run them in an Isolate. This might mean Isolate-based Serverless is only for newer, more modern, applications in the immediate future. It also might mean legacy applications get only their most latency-sensitive components moved into an Isolate initially. The community may also find new and better ways to transpile existing applications into WebAssembly, rendering the issue moot."


The way I see it, Firecracker is more flexible, but Cloudflare Workers' isolates are faster. Amazon can't afford the limitations of isolates, hence this project.


"Process Jail – The Firecracker process is jailed using cgroups and seccomp BPF, and has access to a small, tightly controlled list of system calls."

So basically, a gVisor alternative?


Firecracker contains a machine emulator. This emulator jails itself before launching the OS to reduce the attack surface it exposes to the host.


gVisor doesn't use KVM:

"Machine-level virtualization, such as KVM and Xen, exposes virtualized hardware to a guest kernel via a Virtual Machine Monitor (VMM). This virtualized hardware is generally enlightened (paravirtualized) and additional mechanisms can be used to improve the visibility between the guest and host (e.g. balloon drivers, paravirtualized spinlocks). Running containers in distinct virtual machines can provide great isolation, compatibility and performance (though nested virtualization may bring challenges in this area), but for containers it often requires additional proxies and agents, and may require a larger resource footprint and slower start-up times."


Yeah but one of the main ways in which gVisor provides security is by intercepting system calls and strictly limiting which calls can be made. Firecracker may use KVM instead of running entirely in usermode, but as far as most of us are concerned, that's an implementation detail. The pertinent question is whether the price of security is limiting the possible system calls, which means that Firecracker won't be able to run arbitrary containers, just as gVisor doesn't guarantee that it can run arbitrary code (which may require filtered system calls).


That’s not true. Your guest application has access to all Linux system calls in the guest VM.

You can see the security model here: https://github.com/firecracker-microvm/firecracker/blob/mast...

The Firecracker process itself is limited in the system calls it can make, but KVM lets the guest Linux kernel expose a full set of system calls to end-user applications.


Does this provide any multi-host cluster management capabilities?


Does it support Windows?


https://firecracker-microvm.github.io/ says

> What operating systems are supported by Firecracker?

>

> Firecracker supports Linux host and guest operating systems with kernel versions 4.14 and above. The long-term support plan is still under discussion. A leading option is to support Firecracker for the last two Linux stable branch releases.


It's KVM-based, so no, it doesn't.


KVM supports Windows just fine, which is why you can run Windows on GCP and OpenStack. And Firecracker seems to support enough of a machine to boot Windows, as long as the Windows instance has support for virtio disk devices and a virtio NIC.

However, it seems they boot in a slightly unconventional way: they take an ELF64 binary and execute it. This works for Linux and likely some other operating systems that can produce ELF64 binaries. Windows supports legacy x86 boot and UEFI, but likely not ELF64 "direct boot".

So if you could get Windows into an ELF64 binary and have it run without a GPU, you could have it boot. So, likely not. But the reason isn't due to KVM.


Can someone explain to me how this works? Is it an orchestration service for containers like Kubernetes, or is it something different?


I am extremely excited by this. I wonder if this can be used to provision JIT Kubernetes workers.


How does this compare to containers?


Containers share the OS kernel and some services, so a container can only run Linux. This, by contrast, is a virtual machine monitor, so it deals with virtual machines.

Firecracker can likely run other operating systems, such as IncludeOS. You can't run those in containers.



