
It's kind of funny, looking at these halting discussions as a backend web programmer. At the heart of it, all I do all day is try to write nonterminating programs in an environment where the entire universe seems hell bent on terminating the event/polling loop. It's kind of perverse when you think about it.



The nice thing is that you can actually build stuff like web servers in a total language :) Using something called co-induction, a total language can model things that are potentially infinite, e.g. a web server. Co-induction replaces the requirement that every function must terminate with the requirement that the next step (e.g. the next value of a stream) can be computed in finite time. So the event loop could be modelled using co-induction, as long as the event handlers do not contain infinite loops.
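A rough sketch of that idea, using a Python generator as a stand-in for a co-inductively defined stream (all names here are illustrative, not from any real framework): the server as a whole is an infinite stream of responses, but each step that produces the next response is a terminating function.

```python
# Co-inductive flavour in Python: the stream never promises to end,
# only that each next element arrives in finite time ("productivity").

def handle_request(request):
    # A total handler: straight-line code, terminates on every input.
    return f"echo: {request}"

def serve(requests):
    # The "server": a potentially infinite stream of responses.
    # Each step calls a terminating handler, so every next() is finite.
    for request in requests:
        yield handle_request(request)

responses = serve(iter(["a", "b", "c"]))
print(next(responses))  # each step terminates: prints "echo: a"
```

Totality checkers in languages like Agda or Idris verify exactly this productivity property mechanically; Python, of course, just trusts you.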


why is that co-induction instead of straightforward induction?


The style of coinductive reasoning is very different.

In general, in induction, one has a base case and an inductive case. Hence, you can only build "finite" objects (since you start with the base case and can apply the induction rule a finite number of times).

In coinduction, you start with an infinite object, and you show that you can "progress" - that is, take a step in your computation. So, this shows that you can _reach_ states in finite time, even if the full structure is inaccessible, since you show progress on each state.

The phrase that I've heard is "induction is to prove what's true, coinduction is to prove what's not false"

Please do Google "practical coinduction"; it has a wealth of examples.
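The contrast might be easier to see side by side. Here's a toy illustration (both functions are made up for the example): an inductive definition builds a finite value from a base case plus finitely many rule applications, while a co-inductive one defines an infinite object of which we only ever demand finite observations.

```python
import itertools

def nat_sum(n):
    # Induction: a base case...
    if n == 0:
        return 0
    # ...plus finitely many applications of the inductive rule.
    return n + nat_sum(n - 1)

def fibs():
    # Coinduction: the object is infinite, so there's no base case.
    # We only guarantee "progress": each observation is finite.
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

print(nat_sum(4))                          # 10
print(list(itertools.islice(fibs(), 6)))   # [0, 1, 1, 2, 3, 5]
```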


The usual argument given for wanting programs to halt is that if you can't solve the halting problem for your language, you can't prove anything, so we want languages where we can prove things.

But you don't have to have programs halt to solve that. It is a perfectly valid solution to the halting problem that all your event handlers halt and your mainloop doesn't halt (or only halts on receipt of a signal, and never on receipt of a request, or something). And then you have your ability to prove things back.
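In code, the split looks something like this sketch (all names invented for illustration): every handler provably terminates, while the main loop alone is allowed to run forever and halts only on a signal, never on an ordinary request.

```python
def handle(event):
    # Total: no loops, terminates on every input.
    return event.upper()

def main_loop(events, log):
    # The one place non-termination is allowed; conceptually
    # `while True`, exiting only when a signal arrives.
    for event in events:
        if event == "SIGTERM":       # halts on a signal...
            return
        log.append(handle(event))    # ...never on a request

log = []
main_loop(iter(["ping", "pong", "SIGTERM", "unreached"]), log)
print(log)  # ['PING', 'PONG']
```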


The web is completely transactional. You want nearly all your code to terminate (and fast); it's just the small layer calling the terminating parts that you want to run forever.

Thus, you will almost certainly gain by annotating the non-termination at the main loop's type, and having the compiler tell you about both any possible bottom in the rest of your code and any possible termination of the main loop.
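One mainstream approximation of "annotating the non-termination at the type": Python's `typing.NoReturn`. A checker such as mypy will then flag a `main_loop` that can fall off the end, while the handlers' ordinary return types promise they come back. The function names here are illustrative.

```python
from typing import NoReturn

def handle(request: str) -> str:
    # Terminating part: its return type promises it comes back.
    return request[::-1]

def main_loop() -> NoReturn:
    # Declared never to return; a type checker will complain
    # if any path out of this function can actually return.
    while True:
        request = input()
        print(handle(request))
```

This is much weaker than what a total language checks (nothing verifies that `handle` terminates), but it does put the intended termination behaviour in the types where tooling can see it.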


The modern/future serverless paradigm models services as functions; see for example https://docs.datomic.com/cloud/ions/ions.html


That's why I love containers... you definitely write your code to not leak resources, but if there's any unexpected state, your recovery strategy is to report the error, bail, and the whole container is blown away.


You shouldn't need something as "big" as containers for that - ordinary user processes would work. (Erlang takes this model even further, to actors within a single unix process).
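The restart-on-unexpected-state model doesn't depend on containers at all. Here's a toy version at the level of plain functions (Erlang supervises lightweight actors; a process supervisor like systemd does the same for unix processes; every name below is made up):

```python
def supervise(worker, max_restarts):
    # "Let it crash": don't handle unexpected state locally;
    # report, bail, and restart from a clean slate.
    restarts = 0
    while True:
        try:
            return worker()
        except Exception as err:
            restarts += 1
            print(f"worker died ({err}); restart #{restarts}")
            if restarts >= max_restarts:
                raise  # escalate, like a supervisor giving up

attempts = []
def flaky():
    # Simulates transient unexpected state: fails twice, then works.
    attempts.append(None)
    if len(attempts) < 3:
        raise RuntimeError("unexpected state")
    return "recovered"

print(supervise(flaky, max_restarts=5))  # prints "recovered"
```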


That's nice in theory, but it's insufficiently imaginative. The unexpected state can be out of memory, out of disk, persistent network failures, etc.

And you need to be able to mitigate bugs that make it to prod. Okay, your code never lets bugs into prod, but your dependencies may have issues, or you might have an unexpected conflict with the underlying OS. If you can limp along with a buggy configuration that's causing some containers to go down, you can often get a patch in place without having a total outage.


> That's nice in theory, but it's insufficiently imaginative. The unexpected state can be out of memory, out of disk, persistent network failures, etc.

Well sure, but if that's your threat model then don't you need to restart whole VMs/machines rather than individual containers? I've always felt like containers fall between two stools (at least for languages/ecosystems that have a good way of packaging programs with dependencies - I appreciate that they make sense as a deployment solution for e.g. Python): their isolation guarantees aren't as reliable (at least in practice) as full VMs/unikernels, but you pay most of the price in terms of not being able to use normal IPC mechanisms.


> Well sure, but if that's your threat model then don't you need to restart whole VMs/machines rather than individual containers?

You sure do, but you pay someone else to deal with all that nonsense. And because you can share those VMs, it's cheaper.

> I appreciate that they make sense as a deployment solution for e.g. Python

Any language-specific solutions like .war files have no answer for binary dependencies or utilities provided by a package manager, and they're entirely right not to try and solve that problem. You also occasionally will get a third party that requires you run their awful utility... you may be seeing where my skepticism of "we'll just write good code" comes from.

The other advantage is your deployment artifact has a single identifier, so it gives you a much nicer story for rolling back a bad deployment.

> ... but you pay most of the price in terms of not being able to use normal IPC mechanisms.

If I had computing demands that warranted IPC via shared memory, I'd probably agree the overhead of containers is too much.


> You sure do, but you pay someone else to deal with all that nonsense.

I don't think I follow? I've got a class of problems where I might want to restart my application, and a class of problems where I might want to restart the VM/machine. I don't see how adding containers in between the two helps.

> Any language-specific solutions like .war files have no answer for binary dependencies or utilities provided by a package manager, and they're entirely right not to try and solve that problem.

JVM culture steers away from depending on binaries on the host system; in the rare cases where you want JNI then something like https://github.com/mrburrito/jni-loader is the standard approach IME. I think I've used literally two system libraries in my career, and both were standard enough that I was content to include them in the OS baseline. Appreciate that other languages do approach this differently.

> The other advantage is your deployment artifact has a single identifier, so it gives you a much nicer story for rolling back a bad deployment.

I don't see how this is different from any other kind of deployment?


If you have pairs of processes that communicate over IPC, you could just build them into the same container.
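Or, short of baking both into one image, Compose can make two containers share an IPC namespace so shared-memory mechanisms work across them. A hypothetical fragment (service and image names invented; the `ipc` keys are from the Compose Specification):

```yaml
services:
  worker:
    image: example/worker:1.0
    ipc: "shareable"            # expose this container's IPC namespace
  sidecar:
    image: example/sidecar:1.0
    ipc: "service:worker"       # join worker's IPC namespace
```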



