The next generation of serverless (fermyon.com)
103 points by kiyanwang on June 16, 2023 | 90 comments



I remember vividly my intro to "serverless". I had been (ab)using JavaScript and Node.js for a while, breaking the internals of Express.js and messing around with many things. Someone told me that serverless was "like Node.js but without a server, you just write code for an endpoint" and it sounded cool. I just didn't need it at the time, since I was happy with my servers (I'm the creator of npm's `server`).

I kept hearing about it in multiple places, and after many months/years I decided to dig deeper. I found out that "serverless" was just an instance of Express all along! My disappointment was immeasurable. I thought that you really wrote a JS function, executed by "3rd-party servers" (ofc there's a server _somewhere_), or in the worst case that you wrote a function with the signature of an Express middleware and exported it, to be run by their instance. But no, you actually had to set up the whole server, listen on a port and all; it was just more ephemeral than Heroku's already ephemeral servers.

It wasn't until the next generation that "true serverless" arrived, with Cloudflare Workers and the like, where you truly write a plain JS function that takes a Request and returns a Response, as I had expected almost a decade earlier.
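For the curious, that model really is this small; a minimal sketch in the Cloudflare Workers module syntax (the greeting logic is just for illustration):

    // An entire "app": no server setup, no port, no listen().
    // The platform invokes fetch() with a Request and ships back the Response.
    export default {
      async fetch(request: Request): Promise<Response> {
        const name = new URL(request.url).searchParams.get("name") ?? "world";
        return new Response(`hello, ${name}`, {
          headers: { "content-type": "text/plain" },
        });
      },
    };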


And we are still waiting on decent frameworks for this method of writing serverless services. I have 2 projects running like this with a cobbled-together group of utils/helpers to make life easier. No routing layer or anything (that's the job of SST/CDK/SF/etc), but things like a handler wrapper to handle errors uniformly, and other tools I regularly need (permissions checking, payload validation, etc).
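For illustration, such a wrapper can be a one-liner higher-order function; a sketch for an API Gateway + Lambda setup (assumes the @types/aws-lambda typings; `AppError` and the response shape are my own, not any framework's API):

    import type { APIGatewayProxyHandlerV2, APIGatewayProxyResultV2 } from "aws-lambda";

    // Hypothetical app-level error carrying an HTTP status.
    class AppError extends Error {
      constructor(public status: number, message: string) { super(message); }
    }

    // Wrap any handler so all failures become uniform JSON error responses.
    const withErrors =
      (fn: APIGatewayProxyHandlerV2): APIGatewayProxyHandlerV2 =>
      async (event, context, callback) => {
        try {
          return (await fn(event, context, callback)) as APIGatewayProxyResultV2;
        } catch (err) {
          const status = err instanceof AppError ? err.status : 500;
          return { statusCode: status, body: JSON.stringify({ error: String(err) }) };
        }
      };

    export const handler = withErrors(async () => ({
      statusCode: 200,
      body: JSON.stringify({ ok: true }),
    }));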

I know some of that work can be done at the API Gateway/Lambda layer before it hits my function, but I've yet to go down that rabbit hole; it always felt too limiting/rigid, and I could (and do) move way faster in JS (TS).

I feel like we are on the cusp of it getting so much nicer and I have a blast working in this space. I absolutely love knowing my backend will “auto-scale” to whatever I need all the way down to $0 and even on the high end (at my scale) well under $10. My projects are for events (think festivals or “food week”) and so they are extremely bursty. They go months at a time with little to no traffic so “Serverless” and managed services that scale to $0 or near $0 are awesome for my needs.


I could almost imagine WASI being born of the disappointment people felt when adopting the first generation of things branded "serverless".

"What if this stuff was actually as good as it first sounded? Want to make it?"


This sales pitch ignores Cloudflare Workers, which sound pretty similar to me, with both WebAssembly and key-value storage.

The distinguishing part of the article's option seems to be that WebAssembly functions are uploaded in an OCI container, which I don't think anyone else supports, and which I don't think was designed with this use case in mind?


> The distinguishing part of the article's option seems to be that WebAssembly functions are uploaded in an OCI container, which I don't think anyone else supports, and which I don't think was designed with this use case in mind?

OCI is just the storage and registry format, and is being used for all sorts of things (e.g. you can store OPA policies for conftest in it). I'm not familiar with Fermyon's internal workings, but my read of it was that they simply use the storage format, not that there's an intermediate container layer.

Edit: can confirm, there are no containers: https://news.ycombinator.com/item?id=36352869


If its "just storage", how is it any better for this application than a zip file?


OCI is a standard that supports layers (and caching thereof) and content hashes, and has a verified way of sharing (via OCI image registries).

A .zip is much more monolithic, and there are no ready-made .zip registries you can upload your .zip to that will automatically checksum it, or allow it to be delivered in layers (skipping the layers already present on the registry/destination).


Oh no, they didn't ignore them. They lumped them in with the slow, locked-in, poor-developer-experience first-generation providers, but with a "more limited fashion" disclaimer.


I don't think they are going to run the WebAssembly in Docker. That would not change the status quo at all. Even now, AWS Lambda has a longer cold start when it executes a container than when it runs zipped code.


Google App Engine came out in 2008, several years before AWS Lambda. I'd also argue the developer experience (especially for its time) was pretty fantastic.

If you're wanting 'serverless' compute these days you'd probably deploy to something like Cloud Run – containers being the ultimate hedge against vendor lock-in.


I would love to use Google's cloud, but I just can't risk my email, map, and browser services being cut off because some AI determined that my application looks suspicious, with no human customer support to contact, as has been reported multiple times.


Actually I find their support to be pretty good. Granted, I am at a large tech company that likely pays for premium support, but I can always get through to a human who knows what they're talking about. They've even helped with issues inside my app that were my fault.


You could probably profitably resell that kind of access to people who lost everything and have no other way to reach them.


Phone support is tied to an account. It doesn't work on random accounts.


I tried moving off Gmail, I really did. But all the EU-based alternatives just suck so much. So now I'm back to Gmail, but I pay for it (Google Workspace). This way I at least have a commercial relationship with Google which gives me various contractual rights, and access to phone support. I also have a contingency plan: I have my email address on my own domain, and I backup my mailbox once in a while, so that I can migrate off Gmail should I need to do that at some point.


Have you considered Zoho? It's a company based in India, but they have European servers.


We tried Zoho. Had to move because the servers' reputation was awful. Maybe you get your own IP if you go enterprise?


I only use it for my personal mail with my own domain


I don't know when Heroku launched, but that's where GAE fits in the summary for me: it's 'serverless' in the 'not managing a server' sense, but there's some conflation (in the article, though generally too, I think) with the 'function as a service' model, which is a subset of serverless I suppose, and which is where Lambda sits.


Heroku was founded in 2007 by Orion Henry, James Lindenbaum, and Adam Wiggins.


I’m a big fan of Cloud Run.

Add a Dockerfile.
Create a service and link it to GitHub.
Push to main.
Done.

Sure there are all the usual risks of using Google services (might go away, might get locked out) but that container makes it reasonably quick to get back up and running.
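The Dockerfile itself can stay minimal; Cloud Run just wants a container serving HTTP on $PORT. A sketch for a Node app (the server.js entrypoint is an assumption):

    # Cloud Run runs any container that serves HTTP on $PORT (defaults to 8080).
    FROM node:20-slim
    WORKDIR /app
    COPY package*.json ./
    RUN npm ci --omit=dev
    COPY . .
    # server.js (assumed) must read process.env.PORT and listen on it.
    CMD ["node", "server.js"]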


I see that there's some skepticism about running WebAssembly in containers and how it constitutes a next-gen serverless solution. It's important to note that the use of WebAssembly here is not just about the runtime environment but also about the features it brings. WebAssembly binaries can start up significantly faster than traditional VMs or containers. They also have a strong isolation model and security sandbox that allows running multiple tenants in the same supervisor, which can lead to reduced costs and better utilization of resources.


Java servlets in application servers, and IIS CLR handlers: the WebAssembly sales pitch is so 2001.


Except you run a separate Tomcat instance per app, or better yet, embedded Tomcat, anyway.

Source: Someone who runs two dozen Tomcat containers.


That doesn't change the "clever" WebAssembly marketing, though.


I know I am being sold something, but this is a very convincing post IMO. I enjoyed the historical perspective, and all the mentioned downsides of serverless up until now really resonate with my own experience.

Seems like these people understand their problem space very well. Good stuff!


I feel like AWS (and Lambda in this article) doesn't get enough credit for a bunch of revolutionary things they did.

Don't get me wrong, I bitch about them every day, but credit where credit is due :p


I've been dealing with serverless since 2018 and have a company with a 100% serverless and open-source product (webiny.com). I'm not sure I fully agree with your 4 issues, or that WebAssembly is the answer.

"Serverless functions are slow" -> not really, only if designed poorly

"DX serverless functions is sub-par" -> where's your proof, again you'll have bad experience as a developer only if you don't know what you're doing. Which I see mainly from people trying to approach building serverless applications by having a container-like mindset and that leads them to bad design choices.

"Serverless functions come with vendor lock in" -> I think most of us are beyond the point of that vendor locking is bad choice. Worse choice is picking a sub-optimal technology with lower performance, higher cost and lesser reliability.

"Cost eventually gets in the way" -> Again, only if you don't know what you're doing and make bad design choices.

When it comes to WebAssembly, I don't see how it's a better choice of technology than something like Node. In Node I have much wider support for the technology than in WA (talk about vendor lock-in), a proven ecosystem of libraries and knowledge, and a much bigger talent pool to source from. As for the cold-start issue you mention on your website, I can tell you first-hand that cold starts are not really that big of a problem, not big enough to make you switch to a different technology, and there are many ways to mitigate them.

Just saying, I'm far from convinced there is a benefit to switching. I would love to see more detailed benchmarks and examples I could replicate, rather than just statements in a blog post.


If you have Node apps running in Lambda and are happy with the architecture, cost, and operating model: great! You've done good work and/or are very lucky and there's no need for you to rip everything out and start over.

Heck, even if you're curious about WASM and want to try some experiments with (say) fast Rust crypto libraries or embedded database engines w/o the risk of flaky native code crashing your V8 runtime: again, you can just run WASM from a Node worker thread and keep cruisin'.
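In case that sounds exotic: it's a few lines. A sketch of the worker side (the `sum` export and the `wasmPath` field are hypothetical):

    // wasm-worker.ts: run a WASM module off the main thread, so flaky
    // code can't take the main V8 event loop down with it.
    import { parentPort, workerData } from "node:worker_threads";
    import { readFile } from "node:fs/promises";

    const bytes = await readFile(workerData.wasmPath as string);
    const { instance } = await WebAssembly.instantiate(bytes, {});
    // Assumes the module exports a pure function sum(a, b); hypothetical.
    const sum = instance.exports.sum as (a: number, b: number) => number;
    parentPort?.postMessage(sum(2, 3));

The main thread just spawns it with `new Worker(..., { workerData: { wasmPath } })` and listens for the message.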

But those of us who _don't_ have a huge investment in Node, or who have hard requirements around e.g. memory usage, cold start times, or even just plain old _cost_ (which can become a major factor when you consider the AWS lock-in) that Lambda doesn't meet, really benefit from another option.

Your good fortune in finding a stack that works well doesn't mean that folks who have different needs or constraints are dumb, ignorant, or lazy.

As an aside, I think you also might be underestimating the depth of experience and knowledge of the Fermyon crew when it comes to containers, cloud runtimes, and serverless development. This is substantially the same team that built Helm, and a lot of other Kubernetes and cloud-native ecosystem projects along the way.


I can believe the other points, but the WASM hype train has officially jumped the shark.


This pivots from serverless application hosting to... a database?

I mean, just sell me on the wasm-in-the-cloud premise; that sounds awesome enough.

The k/v store angle is just confusing. If devs don't want to manage their dev environment, you can spin one up on any cloud. Serverless services especially, which are billed per API usage, are not going to break the bank even if each developer has their own dev backend in the cloud.


>and then invoked the CGI program directly. There was no security sandbox, and CGI was definitely not safe for multi-tenancy

This isn't true. Linux is the security sandbox. Multitenancy is safe using a separate user for each site.

>Like CGI, PHP was never multi-tenant safe.

This isn't a problem with PHP. The following story about the author's site on a shared host getting hacked was a problem of shared hosts not caring about security.


> This isn't a problem with PHP.

I'd argue it is, since PHP's common runtimes expect you to cross user boundaries all the time:

- as an Apache2 module, PHP runs as the same user as the web server, so the web server needs to have write access across all tenants.

- FPM recommends running on a TCP socket, letting tenants freely access other tenants' PHP processes. Unix sockets can solve that issue with carefully set permissions, but the documentation barely mentions that use case.

Containers are the minimum security boundary.


>since PHP's common runtimes expect you to cross user boundaries all the time:

No, they don't. Just because a default installation is single-tenant doesn't mean shared hosts can't configure it to be multi-tenant. It is simple to set up FPM so that each site's pool uses its designated user and chroots.
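For illustration, a per-site pool looks roughly like this (paths and names are made up):

    ; /etc/php/fpm/pool.d/tenant1.conf
    [tenant1]
    user = tenant1                    ; PHP code runs as this site's own user
    group = tenant1
    chroot = /srv/tenant1             ; confine workers to the site's directory
    listen = /run/php-fpm/tenant1.sock
    listen.owner = www-data           ; only the web server may connect
    listen.group = www-data
    listen.mode = 0660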

>Containers are the minimum security boundary.

Their security boundary is also the kernel. It's not that much different.


> Their security boundary is also the kernel. It's not that much different.

That's the whole point: if you have to chroot/containerize everything because it violates regular permission models too much, then PHP is not multi-tenant capable by itself.


Speaking of not caring:

"We have things like protected properties. We have abstract methods. We have all this stuff that your computer science teacher told you you should be using. I don't care about this crap at all." -Rasmus Lerdorf

"I really don't like programming. I built this tool to program less so that I could just reuse code." -Rasmus Lerdorf

"I was really, really bad at writing parsers. I still am really bad at writing parsers." -Rasmus Lerdorf

"I'm not a real programmer. I throw together things until it works then I move on. The real programmers will say "Yeah it works but you're leaking memory everywhere. Perhaps we should fix that." I’ll just restart Apache every 10 requests." -Rasmus Lerdorf

"I don't know how to stop it, there was never any intent to write a programming language [...] I have absolutely no idea how to write a programming language, I just kept adding the next logical step on the way." -Rasmus Lerdorf

"For all the folks getting excited about my quotes. Here is another - Yes, I am a terrible coder, but I am probably still better than you :)" -Rasmus Lerdorf

"PHP is just a hammer. Nobody has ever gotten rich making hammers." -Rasmus Lerdorf

Ian Baker's PHP Hammer:

https://blog.codinghorror.com/the-php-singularity/

PHP: a fractal of bad design:

https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/

>I can’t even say what’s wrong with PHP, because— okay. Imagine you have uh, a toolbox. A set of tools. Looks okay, standard stuff in there.

>You pull out a screwdriver, and you see it’s one of those weird tri-headed things. Okay, well, that’s not very useful to you, but you guess it comes in handy sometimes.

>You pull out the hammer, but to your dismay, it has the claw part on both sides. Still serviceable though, I mean, you can hit nails with the middle of the head holding it sideways.

>You pull out the pliers, but they don’t have those serrated surfaces; it’s flat and smooth. That’s less useful, but it still turns bolts well enough, so whatever.

>And on you go. Everything in the box is kind of weird and quirky, but maybe not enough to make it completely worthless. And there’s no clear problem with the set as a whole; it still has all the tools.

>Now imagine you meet millions of carpenters using this toolbox who tell you “well hey what’s the problem with these tools? They’re all I’ve ever used and they work fine!” And the carpenters show you the houses they’ve built, where every room is a pentagon and the roof is upside-down. And you knock on the front door and it just collapses inwards and they all yell at you for breaking their door.

>That’s what’s wrong with PHP.


I mean, I get that you don't like PHP. Which is fine, we need people to keep the other communities humming as well.

IMO, PHP is better than any other language I have used, so I actually respect it.

The fact that the creator is open about figuring it out along the way is pretty cool in my eyes. After all, he started before there was anything [Alta Vista, Lycos, and a lot of friendly geeks trying to figure out how this was better than Usenet?], and he made something which is very popular and really works very well.

Imagine what would happen if Vitalik was as honest as Rasmus ;)

My 2c.


I think Cloudflare Workers belong pretty high up the list on this one.


A very timely response to the Wasmer Edge release yesterday! [1]

Even though I don't agree with their take, I have to congratulate their team on the fast response. In the end, this is what startups are about.

https://wasmer.io/posts/announcing-wasmer-edge


If you go to the bottom, underneath the author's name is JUNE 06, 2023. So I guess it was the other way around. That makes sense, because 3 regions don't make an edge offering, so Wasmer Edge must have been rushed out.


I didn't notice the date, thanks for the correction. I guess my compliment doesn't make that much sense either.

Indeed, we are all in a rush to ship great products to the world (although Wasmer Edge has been more than a year in the making!)


At some point they will "invent" RPC / CORBA / EJB / Vitria BusinessWare / your favorite ancient pet tech.


We use AWS Lambda as a monolithic "serverless" host to run our Django stack, since Lambda can now take standard containers[0].

The full environment costs us US$100/month to run a production b2b application. Traffic is quite light, but we literally don't pay for non-usage. The most expensive part of our monthly bill is the Postgres RDS + ElastiCache. So for our use case it is extremely cost effective, especially for UAT and Staging, where there's no traffic and we use the smallest machines possible (<US$50/month).

To me this is the "next generation" of serverless. Back to the monolith, but where we don't even manage CPU time. We get all the benefits of a monolithic repo (debugging, deployment) and the key benefits of serverless, namely pricing + CPU management. It's not edge Lambda, but admittedly that's not a need of ours.

[0] https://stackoverflow.com/questions/69512271/will-the-cold-s...


But cold start time. This article claims their cold start time is only one or two milliseconds. Lambda's cold start time is orders of magnitude slower.


This is fair. If WASM can do 1-2ms cold starts, then that's incredible!

If you have a 100-function chain with 100ms cold starts in between, that's a nightmare (10 seconds of cold start alone), but we're monolithic, so all our cold starts are bundled together. In the end I can't truly benefit from the 1-2ms; I suspect many teams are in the same boat. And I would lose a lot without the debugging and ease of testing that come with monolithic applications.


What does your cold start time look like with containers?

I was contemplating deploying a similar stack for very occasional internal use: django + postgres rds (no need for cache).

Had some horror stories with Java where the startup time was brutal, so they ended up "pre-warming" lambdas. In the end, the whole ordeal cost more than an EC2 running 24/7.

This was several years ago, so I'm curious whether things have improved.


This does have to be managed: empty Django is ~3.5s, ours is <5s.

Heavy/slow libraries aren't loaded in the web process via the settings file, but are loaded in background workers. We've found that image-processing libraries and PDF handlers are the slowest to load.

We use 3-minute crons to keep a set of concurrent workers alive (the pre-warming you describe). The number of requests that hit a full cold start is <0.5%.

But yeah, not all frameworks can be cold-started well in Lambda. Rails can work. We built our Rails app before Lambda could take containers, and the coupling with slow libraries was too tight to move it into Lambda with any reasonable start time (more like 20-30s). It was just too expensive to decouple and get it into Lambda; it wasn't worth whatever CPU savings we would gain.


> We use 3-minute crons to keep a set of concurrent workers alive (the pre-warming you describe). The number of requests that hit a full cold start is <0.5%.

Does this mean you have a cron job just pinging the serverless function every 3 minutes? I'm curious how much this adds on to your costs. It means that the whole "don't pay for non-usage" thing is not quite true, but maybe it's still significantly cheaper than running an EC2 instance or whatnot. I'm curious about the cost calculation here.

Another thing I'm curious about, since you have a container-based deployment, did you compare with Fargate? It's another "serverless" solution I've been looking at lately and trying to compare with the Lambda approach. As far as I can tell the downside is that it's hard to scale down to zero like with Lambda, but the idea is that it supports long-running tasks instead of having to set up complicated Rube Goldberg machines with Lambdas. Unfortunately I was a bit disappointed to discover that it doesn't support GPU.


> Does this mean you have a cron job just pinging the serverless function every 3 minutes? I'm curious how much this adds on to your costs. It means that the whole "don't pay for non-usage" thing is not quite true, but maybe it's still significantly cheaper than running an EC2 instance or whatnot. I'm curious about the cost calculation here.

Yes. Specifically, it kicks off a Lambda function that does parallel GETs to our website at a special endpoint that has a 100ms "wait" and a basic DB call. This keeps the Lambda processes alive/in-memory.

Keeping a function alive costs ~125ms (100ms wait + 25ms full roundtrip) every 3 minutes, ~0.041% of 1x CPU time. Our website server costs are tiny, and lower still for Staging and UAT. The benefit: we can scale to 1000 servers (the AWS limit) at the speed of our cold start time.
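A sketch of that warmer function (WARM_URL and CONCURRENCY are names I made up for illustration; Node 18+ Lambda runtimes have fetch built in):

    // warmer.ts: invoked by an EventBridge cron every 3 minutes.
    export const handler = async (): Promise<void> => {
      const url = process.env.WARM_URL!;                // the special endpoint with the 100ms wait
      const n = Number(process.env.CONCURRENCY ?? "5"); // how many workers to keep warm
      // N simultaneous requests force N Lambda instances to stay resident.
      await Promise.all(Array.from({ length: n }, () => fetch(url)));
    };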

But if you have a heavily used website, Lambda is not cost effective at all.

> Another thing I'm curious about, since you have a container-based deployment, did you compare with Fargate?

Yes, we use Fargate for our core product, which was built in Rails before containers could be deployed to Lambda. Rails works fine on Lambda[0], but the transition cost wasn't worth it for us. Fargate is great, but as you point out it is expensive if your application isn't heavily used, like ours. To be highly available we always keep a minimum of 2 tasks online, but we're a b2b application, so our night usage (10pm-6am) is zero, and I have 2 machines just sitting there. This is why I love Lambda >> Fargate.

Also, scaling Fargate machines is slow if you get a traffic spike.

[0] https://github.com/rails-lambda/lamby


Honest question: When would I want to use Lambda containers vs ECS Fargate?


Two scenarios I have used them for:

1. Short-running batch jobs activated by events, e.g. a new S3 file or an entry in a DynamoDB table. The startup time is not a problem for these.

2. A serverless API sitting behind an API gateway. Again, the startup latency is no problem.

I used .NET Core for these and the startup latency was never a problem. I haven't tried Node; I imagine it might be an issue there, since the runtime might have to load all your JavaScript up front.


Primarily: an infrequently accessed application that needs to scale when it does get users.

If pricing isn't an issue for an infrequently accessed application, Fargate is great; if it's very frequently used, Fargate is also great.


Why did you switch your Django application to be serverless, and aside from pricing, did you see performance improvements?


Sorry really late to the party.

Price + not worrying about scaling were the two driving factors.

In our Rails Fargate application, we get spiky usage, which is really annoying to manage if there's not enough "room". Sometimes Fargate containers will simply get killed if they're getting hammered too often, and Fargate start time takes minutes.

Overall performance is almost on par with dedicated servers. One area where it lacks: if Lambda decides to spin up a new instance, we get a cold-start time of ~5s. It's not ideal for sure, but for our purposes it's worth the tradeoffs.


> We imagine a feature in which a user can log into their dashboard and say, “This function is misbehaving. I would like to enable function tracing right now.” The serverless function runtime can then immediately begin tracing without any sort of recompile on the user’s side.

So like dtrace or Frida or Intel Pin? Or perhaps like perf? Or good old GDB? It’s hard to say without a more specific description of ‘function tracing’. Perhaps it’s an entirely new user interface design that provides benefits never seen before. I could easily believe that, considering how often debugging tools have terrible UIs, don’t work in modern languages, or just don’t fit modern server workloads very well. Yet whatever it is, I would be surprised if it could not have been implemented for native code using the same underlying techniques as those tools.

That doesn’t mean it should have been implemented for native. I think WebAsssembly is fine to base a platform around, really. Has its benefits, like hardware and OS independence, and being designed to run isolated components, and having its ecosystem designed from scratch without some of the mistakes of the past.

But why are its proponents constantly ascribing to it supposedly unique capabilities that really aren’t?

Even the cold start and memory benefits could largely be replicated for native code if you write a custom kernel. Sure, it’s very useful to be able to just run under an existing OS. But still, people act as if lightweight isolation is fundamentally impossible without a JITted platform like WebAssembly, or without putting everything in the same address space, when it is not. At most, the use of software bounds checks can reduce context switch times somewhat more than you could get with a custom kernel, but at the cost of potential Spectre skeletons in the closet (something I want to try attacking myself some day). And native context switch times are already a few microseconds at most.


> Even the cold start and memory benefits could largely be replicated for native code if you write a custom kernel ... people act as if lightweight isolation is fundamentally impossible without a JITted platform

I think when most people refer to this as being "impossible", they mean practically impossible, not dis-allowed by the laws of physics. "First write your custom kernel" is a step only a miniscule proportion of projects can justify.

As to unique capabilities - none of its capabilities are really earth-shattering in and of themselves. But together and combined with the fact the barrier to entry is so much lower than alternatives like custom kernels, it's quite understandable why people see it as bringing something new and exciting to the table.


I don't understand why we need a 'custom kernel'. Have your serverless server give you the specs for whatever the heck they want to run and compile against that. Why do we need a new kernel?


I could have better explained what I was talking about. The point of a custom kernel would be to get extremely fast startup and context switch times, and extremely low per-process memory overhead, by doing things that Linux doesn't do - either because they're unnecessary or unhelpful for typical Linux use cases, or because they require making stronger assumptions about what userland needs to do. As a relatively flashy example, if you can assume that processes don't need overlapping addresses, you can share page tables between processes and rely on PCID/ASID to ensure each process can only access its own memory. [3] But there are also simpler things like - on Linux, starting a process requires forking the parent process, creating copies of its page tables, only for `exec` to immediately throw them away. Even posix_spawn is implemented on top of vfork. That's fast enough for Linux, but it's unnecessary. A custom kernel that doesn't provide Unix compatibility could dispense with it.

To be fair, none of that may be necessary. This blog post is advertising 1-2ms startup time. Based on a little benchmark [1] I just ran, on my desktop, Linux can execute a trivial process (/bin/busybox true) from start to finish in 0.4ms. A real application executed normally would obviously take much longer to start up. But I think their 1-2ms number is based on snapshotting the process after initialization, so a fair comparison would do that on native as well. Full process snapshotting for native processes is something that has been done before ('unexec', CRIU) but admittedly isn't very common, and I don't have a good way to test it. With snapshotting, there won't be any code executing in the process at startup, so most of the initialization time would be eliminated, but the mere act of loading more memory into the page table would have some cost. But probably not very much.

However, Fermyon uses Wasmtime, and Wasmtime's blog post on snapshotting [2] advertises single-digit microsecond startup, which is pretty cool. Starting native processes in single-digit microseconds is out of the question under Linux. But a custom kernel might be able to get there.

Edit: I guess I should add something. I'm not saying a custom kernel should just be table stakes. Writing one is obviously a major commitment and requires a high level of expertise. But you could say the same thing about JIT runtimes such as Wasmtime. There is even some overlap between what they're doing (i.e. loading binaries and managing processes). A kernel and a JIT are still fairly different projects, but they both require similar types of expertise, and I think the level of complexity is also comparable.

[1] https://gist.github.com/comex/0402dc95afb1c2ca3076d5b1b64bcc... [2] https://bytecodealliance.org/articles/wasmtime-10-performanc... [3] https://stackoverflow.com/questions/73753466/how-does-linux-...


Is WebAssembly even fundamentally different from Java/JVM or C#/CLR? I like WebAssembly but I don't think it brings anything new except that it can kind of run in a browser, but so could Java and C# for a brief period of time.


Was this blog post written by ChatGPT?


I'm quite the serverless skeptic, but honestly this sounds quite appealing.

I particularly like the definition of serverless given at the beginning of this post. It's refreshingly straightforward and realistic, without the marketing BS that the cloud providers tend to give.

I've always felt like developing for serverless hosting is like writing a plugin for a piece of closed-source software, and that sucks from a dev experience and lock-in point of view. Doing for serverless what CGI did back in the day seems like a good idea, and containers/WASM seem like good options for it at the moment.


The author's definition of Serverless is pretty different from what I understand.

It's called serverless mostly because it's not billed based on the number of servers; instead it is mostly billed by the number of requests and/or CPU time * dedicated RAM. With this changed billing model come expectations like relatively fast scaling up/down and "scale down to zero/minimum".

Disclaimer: I work on App Engine, which is arguably the OG serverless service.
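To make the billing model concrete, a back-of-the-envelope example (using AWS Lambda's published on-demand rates of roughly $0.20 per million requests plus ~$0.0000167 per GB-second; exact rates vary by provider and region):

    1,000,000 requests x 100 ms each at 512 MB
      = 1,000,000 x 0.1 s x 0.5 GB  = 50,000 GB-s
      ~ 50,000 x $0.0000167         ~ $0.83 compute
      + $0.20 in request charges    ~ $1.03 total, and $0 for an idle month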


> Serverless apps are applications that are not written as software servers. “Server-less” code is code that responds to events (such as HTTP requests), but does not run as a daemon process listening on a socket. The networking part of serverless architectures is relegated to the infrastructure.

The networking part was always relegated to the infrastructure and always will be. I'm really not sold on this whole "serverless" fiasco.


The most apt use cases, IMO, are not part of the "main loop" of a webapp, but supporting functions: sending an email, resizing an image, any sort of cron, etc. Or the micro part of microservices.


To me, that always feels like it should be close to my app, as in part of the code base. Serverless functions always seem to be very remote from everything else, particularly if I'm not completely sold on AWS. I end up with web and worker containers running in the same deployment, plus some sort of queue, which is also not very satisfying. Oh well.


What about the file system? AWS Lambda allows working with the FS and running child processes. I assume this is not the case when using WebAssembly sandboxes?


This spec describes how to work with the filesystem in WASI (WebAssembly System Interface). That's the spec adopted by Wasmtime, which is the runtime being used by Spin. (Fermyon employee here)



I'm not convinced by their solution. They say you should run WebAssembly in containers? How is that next-gen serverless?


Mikkel from Fermyon here.

That's a misunderstanding. Spin runs your WebAssembly directly from disk. Spin does support distributing applications using OCI registries, but they do not run in a container runtime, just a WebAssembly runtime (Wasmtime).


Ah, I see, thanks!


At the risk of sounding very green, what is the best way to learn serverless/cloud? My decade or so of experience in the industry has been solely application development in C and C++. I've some personal experience in server side programming with PHP. Any time cloud concepts come up, it all seems very foreign to me.


I work for Fermyon, but have been at a major cloud provider in the past, working with cloud for a decade now.

Spin and Fermyon Cloud are a really nice and easy way to get started: https://developer.fermyon.com/cloud/quickstart. It doesn't introduce a whole lot of the inner workings of a cloud. With a lot of other services you'll fairly quickly get to the point where you'll have to dive into concepts of distributed systems and infra.
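The getting-started loop is short, roughly like this (template names and flags vary between Spin versions, so treat it as a sketch):

    $ spin new http-ts my-app   # scaffold an HTTP app from a template
    $ cd my-app
    $ spin build                # compile to WebAssembly
    $ spin up                   # run it locally
    $ spin deploy               # push it to Fermyon Cloud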

If you want to work with containers and servers in the cloud, fly.io is really easy to get started with as well.


I was nodding along until key-value storage, which seems to contradict its own points about vendor lock-in and APIs?


I could be wrong, but key-value is so simple there is nothing to lock in.


Fermyon employee here. There's a WASI specification in the works, which is what is used in Spin and Fermyon Cloud: https://github.com/WebAssembly/wasi-keyvalue/

With Spin you can also swap the backend provider. It uses SQLite as the default, but there's a Redis provider as well: https://developer.fermyon.com/spin/dynamic-configuration#key...
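For example, pointing the default store at Redis is a small runtime-config entry, something along these lines (a sketch; see the linked docs for the exact syntax):

    # runtime-config.toml
    [key_value_store.default]
    type = "redis"
    url = "redis://localhost:6379"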


Why on earth would you want to commoditize your platform?


If it's a viable option, why not give users the benefit?


This is going to be a naive question due to my lack of familiarity with WebAssembly, but what does it bring to the table that wasn't possible with any other standalone program packaged as a binary?


Strong isolation guarantees, mainly, which let hosts (like the OP) share resources more densely and with better startup profiles than virtualization gives you.


> PHP was never multi-tenant safe

You're running on the same host, so there might be exploits that allow you to break free, but that goes for most other "serverless" solutions too.


My wish for the next generation of "serverless" is that they find a better thing to call it than "serverless".


Can someone please elaborate on how this differs from Firebird, which also seems to be a lightweight VM that can spin up very fast?


Are you thinking of Firecracker (https://firecracker-microvm.github.io/)? If so, one of our colleagues (I work for Fermyon) touched on this in a talk at Cloud Native WasmDay recently: https://youtu.be/-cJ_Cn_mgqI?list=PLj6h78yzYM2Pdj8vnO0wfFyKc...

This talk is actually a really nice companion to the blog post.


Can you please provide more information? I only found Firebird SQL.


This generation of serverless ushered in the next generation of monoliths.


Hardly new; ever heard of Java Servlets or .NET IIS Handlers?


Since it sounds like very few folks in this thread have actually fired up Spin and built a thing, I thought I might offer a few observations based on my experience working on a little app over the last 2-3 weeks.

First, the good stuff:

- As a Rust developer, the SDK tooling is really good. I went from running the installer to generating a templated app to dropping in my own app logic in about 15 minutes.

- Local development iteration is also very fast and low-friction. Building everything as small WASM components means you effectively get hot module reloading for free, without needing to shim in a bunch of extra dev-mode-only code loading logic (with all the inherent incompatibilities and heisenbugs)

- The component reuse model feels a lot easier for me to reason about than interminable chains of Express middleware (YMMV, of course)

- You get actual static (WASM) binaries, which can be as small as your compiler makes them. I'm normally pretty thrilled to get a production app container down to <100MB, and the app I've been working on, even without really paying any attention to my dependency graph, is around 2MB. Storage may be cheap, but pulling ~GBs of new container layers for a given release or deployment isn't free or instantaneous.

There are some less-great things:

- The "outbound" DB adapters are _very_ bare-bones. You only have access to a small list of datatypes, and you can't really layer higher-level libraries on top b/c the APIs aren't compatible with standard backend drivers. (Furthermore, if you want to use e.g. MongoDB, Clickhouse, Dynamo, etc., etc.: sorry, I hope you have some sort of HTTP adapter lying around ready to use.)

- WIT (WebAssembly Interface Types) is a cool _emerging_ standard, but like a lot of WAS* community stuff it's still in a fractured "draft" state, and many of the actual interfaces exposed in Spin are unique to their runtime. (This also means that _extending_ something like the PG bindings requires a ton of indirection code spelunking; I gave up trying to add a few new native types after realizing I would also have to support them for MySQL and every other backend DB.)

- The development environment, while relatively complete and usable on its own, doesn't really play nice with Nix or other high-level dev environment tooling. (There are literally invocations of `rustup` inside the top-level `build.rs` for the project, so good luck providing your own paths for e.g. `rustc`.)

On balance, I'm enjoying the experience thus far and starting to think about more ways to apply the tools. I'm also looking forward to trying out more of the self-hosted infrastructure stack. I'm not opposed to paying for managed hosting, but I do occasionally build things that need to run in "offline" or air-gapped environments, so being able to bring up a full hosting environment is pretty clutch.


If you really want serverless, there’s this thing you can write called a “native application” that, once deployed, a user can interact with entirely without a network connection!


Oh, like electron? /s





