Ask HN: How are you using unikernels?
120 points by austinjp on May 28, 2021 | 34 comments
The HN conversations around unikernels suggest that they're not ready for production yet [0] but feel free to set that record straight.

In the meantime, a handful of organisations/individuals seem to be working on becoming "Docker for unikernels". That's probably an unfair description, but they're aiming to produce tools for building and managing unikernels: Unikraft [1], NanoVMs/Nanos [2], Unik [3]. Other orgs are producing unikernel-based OSs and VMs [4].

What is your toolset for building and managing unikernels? What have you learned?

Bonus question: is Unik dead? [5]

[0] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

[1] https://unikraft.org/

[2] https://github.com/nanovms/nanos

[3] https://github.com/solo-io/unik/

[4] http://unikernel.org/projects/

[5] https://github.com/solo-io/unik/issues/172




I'd be interested to know more about what the developer workflow is like on these. Like, do you always build the application as a unikernel, and run it locally on qemu or vmware? What is it like debugging something like that? What is the story for incremental builds?

Or do you basically have to maintain a port of your software so that you can also run it on Linux with all the creature comforts of a normal system? If it's that, do you get weird bugs that only happen in prod and which are a gigantic pain to understand and work through?


They differ very widely. It's instructive to look at Unikraft, which is probably the easiest to develop on (IMHO), e.g.: https://github.com/unikraft/app-helloworld-cpp

Our unikernel (UKL) lets you link your Linux program with the Linux kernel and produces a binary that runs in a VM or on bare metal (essentially a custom vmlinuz). It's a bit laborious at the moment; that's one of many things to fix before release.


As someone who works at NanoVMs: you can run them locally, but if you don't need to, you might as well run natively (i.e. not as a unikernel).

As for "maintaining for linux" - most people are deploying their software to linux servers so if you can develop on linux that's great but if you develop on windows or mac you're going to have to do that regardless if you're using unikernels or not.


MirageOS compiles as either a unikernel (for prod) or a regular ELF image for running as a Linux process (for dev). The dev build still only sees the world through the same ABI the prod build does, though (it tries to have the semantics of being a VM plus an emulator, WPOed/Futamura-projected together) so any restriction prod has, you'll already run into in dev.
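
For concreteness, a minimal sketch of that dual-target workflow (a sketch only: target names, output paths, and exact make targets vary between Mirage versions):

```
# dev: configure and build the unikernel as a regular Unix/Linux process
mirage configure -t unix
make depend && make
./dist/hello              # ordinary ELF binary, runs as a normal process

# prod: configure and build the same code as a Solo5/KVM unikernel
mirage configure -t hvt
make depend && make
solo5-hvt dist/hello.hvt  # boots as a tiny VM under KVM
```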


[Unikraft maintainer]

Your question breaks down into two, I think: 1) whether someone is working exclusively on a typical "userspace" application and then decides to package it as a unikernel right before deployment, or 2) whether you are working exclusively on a unikernel-centric application, that is, making use of non-POSIX, internal (kernel) APIs in order to get better performance or to interact more directly with your platform (e.g. Xen, KVM, etc.) or hardware architecture. Unikernels can achieve excellent performance as they have a very tight coupling with their surrounding environment.

In the case of 1), the developer workflow is more or less the same in my opinion: you build your application in the comfortable way you did before, without having to worry about the underlying kernel. There are two modes to this, though, depending on whether your application is built with an interpreted language or a compiled one.

In the interpreted case, a Python program is an apt example: say you use a framework like Flask to build your HTTP APIs and work on application logic, etc. Here you can use a pre-compiled unikernel which is tailored to run the Python runtime; Unikraft makes these available[0]. :) When you are ready for deployment, you simply mount a filesystem with the source files of your application and point to the entry program as you would normally, like invoking the python3 binary with a path to your Python program.

When a developer is building a compiled application, e.g. with Golang, you can follow the same workflow as you would normally. However, when you are ready to deploy, you do something called binary rewriting: invocations of kernel-space methods, mainly syscalls, are changed into JMP instructions whose targets are addresses inside the unikernel binary. A final linking step puts the two together and you are then able to run your application as a unikernel. HermiTux does something like this[1], and Unikraft is soon to release some new tools to accomplish this too! Stay tuned! :)

Unikraft aims to be POSIX compliant, and you simply select the relevant libraries and options you need to get the features your application needs to run. This way, you don't have to rely on maintaining two versions of your application.
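
To make the interpreted case above concrete, here is a rough sketch of the flow using the prebuilt python3 kernel from [0], booted under QEMU/KVM with the application shared over 9pfs (a sketch only: the image filename, the "fs0" mount tag, and the cmdline convention here are approximations; the release page and the app's README have the exact invocation):

```
# fetch the prebuilt python3 unikernel from [0] (exact filename is an assumption)
wget https://releases.unikraft.org/unikraft/apps/python3/stable/python3_kvm-x86_64

# put your application (e.g. a Flask app) into the directory that gets mounted
mkdir -p fs0 && cp app.py fs0/

# boot the kernel and point it at the entry program, as you would with "python3 app.py"
qemu-system-x86_64 -enable-kvm -cpu host -m 128M -nographic \
  -kernel python3_kvm-x86_64 \
  -fsdev local,id=fsdev0,path=$(pwd)/fs0,security_model=none \
  -device virtio-9p-pci,fsdev=fsdev0,mount_tag=fs0 \
  -append "/app.py"
```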

In the case of 2), where you are building a unikernel-centric application, it's not much different from working with any compiled language. At least in my workflow, when I am working on some application or a new internal library for Unikraft, I just re-compile the source files I'm working on and then perform the final linking step to create the unikernel binary. I mainly use kraft[2] to help me acquire the relevant source files and libraries I want to use and work on. The kraft repo ships with a suite of Docker images, and this is my main way of creating a dev environment with the right version of gcc, qemu, and all the relevant debugging tools. I typically invoke this environment like so:

```
cd to/my/project/path

docker run -it --rm \
  -e UK_KRAFT_GITHUB_TOKEN=<set token> \
  -v $(pwd):/usr/src/unikraft \
  --device /dev/kvm \
  --entrypoint bash \
  unikraft/kraft:staging

# you are now inside a container
kraft list update
kraft init
# ..etc..
kraft configure  # or kraft menuconfig
kraft build
kraft run  # or use qemu-guest -k ./build/my_kernel
```

I hope this helps!

[0]: https://releases.unikraft.org/unikraft/apps/python3/stable/

[1]: https://dl.acm.org/doi/pdf/10.1145/3313808.3313817

[2]: https://github.com/unikraft/kraft


The definition of what a unikernel is needs to be narrowed down; a lot of the projects in this space (not all of the ones listed above) have material differences that are not clear:

- some run only one language

- some require recompilation

- some essentially swap out libraries, others do something closer to dropping your already mostly static binary in a minimal disk image

- some build pid1 processes, others VM images

Anyway, here are some additional entries in the space:

- https://ssrg-vt.github.io/hermitux/

- http://osv.io/

- https://github.com/linuxkit/linuxkit (more embedded/minimal VM than unikernel)

- https://nabla-containers.github.io/ (runs on Solo5)

I am currently using LinuxKit to build AMIs for cloud providers from containers. I wouldn't necessarily class LinuxKit as a unikernel project because it doesn't have the hallmark blurring of user and kernel space or kernel-as-a-library, but you can customize the kernel, so it's an adjacent idea, and I think it's the one most likely to be in actual use at non-hyperscalers.
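
As a rough sketch of that LinuxKit-to-AMI flow (the subcommands are real, but treat the flags, file names, and bucket as placeholders and check the LinuxKit docs):

```
# build an AWS-flavoured raw image from a LinuxKit YAML definition
linuxkit build -format aws myimage.yml

# upload the image and register it as an AMI (bucket name is a placeholder)
linuxkit push aws -bucket my-linuxkit-images myimage.raw
```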

[EDIT] Added OSv since it's on one of the linked lists but is a pretty large active player in the field.


https://github.com/unikernelLinux/ once we get around to releasing it, which should happen in a few weeks.

I agree with your point above that there are many different things called a unikernel, with widely differing scope and use cases.


If anyone is curious how Unikernels work and what the dev workflow is like, I had the creator of NanoVMs on my podcast last month at: https://runninginproduction.com/podcast/79-nanovms-let-you-r...

We talked about a mixture of Unikernels in general and how they run their infrastructure.

What's interesting about Nanos is it's POSIX compliant. In other words you don't have to write your app differently to get it to run in their Unikernel.
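
As an illustration, running an unmodified Linux binary with the ops CLI looks roughly like this (binary name and port are made up, and the flags should be double-checked against the ops docs):

```
# take an unmodified Linux binary and boot it locally as a Nanos unikernel
ops run ./server -p 8080
```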


There’s a large subtle gap in various APIs between Linux and POSIX that can matter. That might be a barrier for various applications since Linux is kind of the default target in the cloud with Windows following.


I agree completely, and Linux runtimes generally meaningfully address the difference.

Nanos in particular, though, is written against Linux.


I don't care one second about POSIX, and I don't have to rewrite the app differently, because I mostly use managed languages with rich library ecosystems.


Probably not the kind of answer you're looking for, but networking appliances (routers, switches, firewalls) essentially used to be "unikernels" in the early '90s, particularly the original Cisco IOS. It's also a great example of something that blurs the line between embedded systems and unikernels…

That said, no modern router is a "unikernel" anymore, and the low end switches fall more into the embedded systems category.

(btw: out of curiosity, how would HN differentiate between an embedded system/RTOS and a unikernel?)


Embedded system: Has an OS designed to be flashed to firmware with no dependency on external storage devices and survive for a long time without updates/reboots.

RTOS: Any operating system that 100% absolutely guarantees IRQ response time or CPU scheduler time. An embedded system's OS may or may not be realtime. An RTOS may be possible on the standard PC platform.

Unikernel: User space program linked with required kernel code and designed to run in kernel mode; also possibly designed with the expectation of a hypervisor. The things that come to mind with this are Xbox and Xbox 360 games.

So all three of these to me are not mutually exclusive concepts. A unikernel program running in kernel mode probably doesn't need an OS, but there may be need for a hypervisor which the unikernel may use via some virtualized hardware type interface, and that may essentially be an embedded RTOS.


Forgive my ignorance, does this mean that when your user space program makes syscalls, it doesn't require the CPU to go into ring 0/"protected mode" as your program is already running in ring 0?


Yes; but furthermore, the "syscall" doesn't end up happening. You're really just writing a kernel driver calling kernel functions, with a libc-looking (or other-language's-runtime-looking) library abstraction (that can be optimized out at unikernel build time.)

Usually this library abstraction also has a userspace actually-does-syscalls backend you can build against, giving you a regular userland process, rather than a unikernel, for low-overhead testing. (In this case, if there's anything that absolutely must load into kernelspace, it may compile to a separate DKMS module, which will then communicate through some form of IPC with your userspace process.)

But all that complexity is just for testing; in prod, you just get a monolithic kernel with your code inside (that being either an existing kernel with your code being a driver; or your code being "the kernel", with the library being an exokernel framework.)


Great answer. Thanks


Many unikernel projects were ahead of their time. For example ClickOS [0] is ~7 years old but all its ideas still sound innovative. Someone could build an entire business on top of network function virtualization, using unikernels as an efficient sandboxing mechanism.

I’m not sure why unikernels have not caught on widely. I suspect their time has yet to come for some applications, but at least for NFV and sandboxing, I would bet on solutions using eBPF or XDP with WASM for sandboxing.

[0] https://github.com/kohler/click


I think there isn't a compelling reason. For the sake of argument, let's just say there is a difference in overall system performance, say 20%, for a workload you care about.

You get to change your tooling. I think the unikernel tooling for building deployable images is (or could be) superior, but it's different.

Your service is now running as it did before, but you've also taken on a whole lot of risk that this experimental operating system, used by a few tens of people around the world, isn't going to cause problems for you, and that there won't be some library you want to use that isn't supported.

If your job is to prop up microservices or run websites, I don't see why you would do this unless you were really that bored and had no sense of responsibility.


Tooling is really the chicken/egg problem of ecosystems. Good tools come from the best devs, who have a higher chance of existing in big, active ecosystems. But most devs won’t join an ecosystem if it doesn’t have good tooling, so how can it become big and active?

An ecosystem needs its early adopters to be excellent devs, so the early community has a high concentration of them and therefore greater probability of developing good tools that will attract a wider audience.

That’s why I wouldn’t give up quite yet on unikernels. There are a lot of really smart people working on them. What the unikernel ecosystem needs is a “killer app” to demonstrate a flagship use case. I used to think Fly.io could do that, but they seem to have settled on Firecracker (as has Amazon, tbf).

Whatever happened to the team behind the unikernel company acquired by Docker?


Yes, the Solo team has been focused on service mesh/APIs rather than Unik. Unik is not a kernel implementation but an "orchestrator" of sorts.

One of the cooler things that we've found though is that there is a very wide misconception that you'll need a k8s for unikernels - that is simply not the case. When we deploy them to the cloud we use the underlying storage/networking layers that already exist - so we don't have to manage all of that. Unikernels remove a lot of the complexity that comes with container infrastructure.

When you deploy a Nanos unikernel we create a machine image, which, if you are deploying to, say, AWS, becomes an EC2 image, and the instance that spins up is that unikernel; there is no layer of Linux that you deploy something else onto. I highly recommend that anyone who is remotely interested just try it out - https://ops.city - it'll clear up any deployment/orchestration questions you have almost immediately.

In fact, speaking of AWS, we just reduced the deploy time there by 66%, so now you can build/deploy your unikernel to AWS in < 20 seconds. It's actually faster than deploying to Google Cloud, which remains my favorite place to deploy them.
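
For anyone wondering what that path looks like in practice, here's a rough sketch with the ops CLI (config contents, names, and exact flags are assumptions; the ops docs have the authoritative version):

```
# build a machine image from your binary and upload it to AWS as an AMI
ops image create ./server -t aws -c config.json

# boot an EC2 instance from that image -- the instance itself is the unikernel
ops instance create server -t aws -c config.json
```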


Let's define unikernels as running under a hypervisor. If it's not, it's more likely to be called an embedded operating system.

I don't think unikernels are worth it unless you're running at massive scale. A realistic target audience is providers of serverless-style services, in which case you're taking a vanilla application someone else wrote and compiling it against a unikernel.


> Let's define unikernels as running under a hypervisor.

I don't think this is a useful definition. Our Unikernel Linux (UKL) can build a unikernel that runs directly on bare metal, and that's a fantastic feature and also quite different from embedded OSes.

> unless you're running at massive scale

That's your opinion. We are intending to deliver unikernels to anyone who wants performance and/or real time guarantees. They don't need to be operating at scale.


I think you're wrong. Running a thing in a slightly virtualized environment doesn't suddenly change the nature of the thing.

I'd say embedded operating systems are more like the exact opposite. A unikernel runs a single application on a fully capable modern computer. An embedded OS does the opposite, being able to run multiple applications on a system that wouldn't be thought of as a "computer" (in the PC sense, not a literal one).


> Let's define unikernels as running under a hypervisor. If it's not, it's more likely to be called an embedded operating system.

Not clear why you define unikernels like that; it has no relation to running in a hypervisor or not. As a result your follow-on statement is a bit of a non sequitur.


> I don't think unikernels are worth it unless you're running at massive scale.

"Worth" what? If deploying a microservice as a unikernel is just 'M-x deploy' (or a click, for mouse users) away, what's the cost?


For me, the cost would be the loss of ecosystem— log collection, monitoring, cores + rr recordings, being able to connect with gdb and/or sniff around the live system from a familiar userland, etc.

These concerns are reduced when you're at the thousands-of-machines scale since at that point you probably already have a somewhat hand-rolled solution for this kind of thing anyway.


We already solved those problems with containers (I know, half of HN doesn't like containers and everyone should manage heavy VMs or bare metal machines just like our ancestors did). Notably, logs and metrics are exfiltrated, and detailed logging and monitoring largely obviate debuggers in production (indeed, debuggers aren't even baked into the final image, nor are they installable, since the app user oughtn't be root). These practices seem pretty portable to the unikernel model, and they don't require any hand-rolled workarounds or thousands of machines of scale.


Did we? I mean, in prod I still frequently pop a shell in a pod, apk add some tool, and reproduce issues outside of the app. Sure, I could trawl through GBs of Istio logs or whatever, but it would probably at least double incident resolution time if I had no userland available on the machine with the problem…


My understanding of the best practices is that shelling into prod is break-glass only. Developers need approval to get escalated permissions to shell into prod in the first place. Further, containers shouldn't run as root (security), so I don't know how you would install software anyway. Logs and metrics should similarly be queryable via some central log explorer service like CloudWatch, Splunk, Prometheus, or even kubectl+grep. You shouldn't have to manually page through GBs of logs.

Our images are often pretty stripped down (coreutils at most, often just a Go binary and some certs), so there aren’t many debugging tools available.

This might make our time to resolution slightly higher, but it keeps our incident count quite a lot lower because we very rarely need to break glass in the first place (this means you have to establish norms for logging, instrumentation, and tests).


Curl to /dev/shm/ ;)


Logs work: https://nanovms.com/dev/tutorials/logging-go-unikernels-to-p...

Monitoring like prometheus and such work just fine, although we have a few customers using https://nanovms.com/radar as well.

Our developers routinely use gdb to debug various issues: https://nanovms.gitbook.io/ops/debugging .
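
One common way to do that with any QEMU-hosted unikernel is QEMU's built-in gdb stub, roughly as sketched below (image and symbol file names are placeholders, and the linked guide has the Nanos-specific workflow):

```
# boot the unikernel image with QEMU's gdb stub on :1234 (-s) and the vCPU halted at startup (-S)
qemu-system-x86_64 -s -S -drive file=./myapp.img,format=raw -nographic

# in another terminal, attach gdb to the stub and resume execution
gdb ./myapp
(gdb) target remote localhost:1234
(gdb) continue
```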

You are correct, though, that there is very much a lack of a traditional userland, and that is on purpose. A lot of people expect these things to be similar to an Alpine-like distro, and that is just the wrong picture to have. Think more Heroku/serverless - it is all about the application. So you wouldn't SSH into your application. Yes, there is an underlying kernel, but it is more about freeing the developer to think more about their application and less about deploying the server itself.


This isn't true of all unikernels. Ours lets you run regular programs like perf and gdb inside the unikernel. (Optionally, of course, you can go "pure unikernel" if you want.)


I'm not. For my use case, the disadvantages (worse performance in practice, hellish debugging, no modularity) outweighed the advantages (a theoretical increase in performance).


We (Nanos) haven't done the work to put in a full suite of performance regression testing yet but in general we can run lots of workloads much faster.

Rust webservers, Go webservers, MySQL, Redis, nginx <-- all of these have been clocked running a lot faster. Go webservers in particular (our own websites are deployed that way) can run ~2X faster on GCP and 3X faster on AWS.



