repl.it looks awesome. It'd be interesting to hear more about how you're doing provisioning and orchestration. Are you running a Kubernetes or DC/OS cluster by chance? Are you spinning down idle instances after some time? What actually happens behind the scenes to make a deploy go live? etc

And then some more general questions - Can I hook a CI into the deploy loop? (Maybe that doesn't quite make sense given the model is something like Jupyter notebook meets Glitch.) Also, is there a repo being managed behind the scenes, as GitHub does with Gist, and if so, any plans to open access to those?




Great questions, and we intend to write about this more in the future. We had to build our own container orchestration, mostly for speed and customizability.

A bit of context: for every language/environment we have a Dockerfile (naturally) and a JSON configuration that describes how it runs, how it installs packages, how it runs unit tests, how it formats code, and so on. When we build the container we insert a program we call pid1; it's the container's interface to the rest of the world.
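
To make that concrete, here's a hypothetical sketch of the shape such a per-language config might have, written as a Go struct with JSON tags; the field names are illustrative only, not the actual schema:

    // Hypothetical shape of the per-language JSON config described
    // above; field names are illustrative, not the real schema.
    type LangConfig struct {
        Language string `json:"language"` // e.g. "python3"
        Run      string `json:"run"`      // command that starts the user app
        Install  string `json:"install"`  // how to install packages
        Test     string `json:"test"`     // how to run unit tests
        Format   string `json:"format"`   // how to format code
    }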

The container manager creates pools of these containers, with some rudimentary predictive logic to make sure we have enough containers to deliver on our promise of "loads in 2 seconds". When we take a container out of the pool, if we're reviving an existing repl, we mount a GCS-backed FUSE filesystem with the user's code (it needs to be backed by GCS to handle persistence: say you're writing to a log file, it should be there next time you load your project). We then send the relevant setup command to pid1 (either init or wakeup), which sets up the repl to start the user app, the REPL, or what have you.
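
A rough sketch of that checkout path, assuming Go; Pool, Container, SendToPid1, and mountUserFS are hypothetical names standing in for the pieces described above, not the actual API:

    // Take a pre-warmed container from a pool and prepare it to
    // serve a repl. Minimal sketch of the flow described above.
    func checkout(pool *Pool, replID string, reviving bool) (*Container, error) {
        c := pool.Take() // kept warm to meet the "loads in 2 seconds" target
        if reviving {
            // Mount the GCS-backed FUSE filesystem with the user's
            // files, so writes (e.g. a log file) persist across sessions.
            if err := mountUserFS(c, replID); err != nil {
                pool.Return(c)
                return nil, err
            }
        }
        cmd := "init"
        if reviving {
            cmd = "wakeup"
        }
        // pid1 is the in-container agent; it starts the user app,
        // the REPL, or what have you.
        if err := c.SendToPid1(cmd); err != nil {
            return nil, err
        }
        return c, nil
    }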

> What actually happens behind the scenes to make a deploy go live

We poll the container for published ports, and the moment we see an open port we add a record to etcd, which stores the routing state. We then notify the client that we published a port, and it reacts by opening an iframe. The iframe (or any request to the published URL) hits our outer reverse proxy, which queries etcd to find the container; if the container is alive, we send the traffic to the relevant container manager, which has another reverse proxy that sends the traffic into the container.
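
In sketch form, the publish step might look like this, assuming the etcd v3 Go client; the key layout and function name are hypothetical:

    import (
        "context"
        "fmt"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    // Record the container's address under the repl's route key so
    // the outer reverse proxy can find it. Key layout is illustrative.
    func publishPort(ctx context.Context, etcd *clientv3.Client, replID, host string, port int) error {
        key := fmt.Sprintf("/routes/%s", replID)
        _, err := etcd.Put(ctx, key, fmt.Sprintf("%s:%d", host, port))
        return err
    }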

If the container is dead, however (from idling or because of an error), we revive it by picking a container out of one of the pools and going through the initialization phase described above.
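
Put together, the outer proxy's decision could look roughly like this; lookupRoute, isAlive, and revive are hypothetical helpers standing in for the etcd query, the liveness check, and the pool checkout above:

    import (
        "context"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    // Route a request for a repl's published URL, reviving the
    // container first if it has died. Sketch only.
    func route(ctx context.Context, w http.ResponseWriter, r *http.Request, replID string) {
        addr, ok := lookupRoute(ctx, replID) // etcd query for routing state
        if !ok || !isAlive(addr) {
            var err error
            if addr, err = revive(ctx, replID); err != nil {
                http.Error(w, "repl unavailable", http.StatusBadGateway)
                return
            }
        }
        // Hand off to the container manager's inner reverse proxy,
        // which relays the request into the container itself.
        proxy := httputil.NewSingleHostReverseProxy(&url.URL{Scheme: "http", Host: addr})
        proxy.ServeHTTP(w, r)
    }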

Finally, we also host our own Docker image registry so that we can push new images, whether new languages, new versions, or what have you.

There is a lot more to talk about here, so I or someone on the team will write a post soon.


Thanks! This is great. Definitely looking forward to a post.



