This is shelling out to nix-shell (https://github.com/jetpack-io/devbox/blob/97c19c370287e203bb...), which means it will only support bash, AFAIK. I've seen a lot of discussion around using nix-shell for dev environments, but I read somewhere that it wasn't originally made for this purpose, rather just for building packages, with bash being only one of the limitations.
I tried to experiment with nix-shell myself, but I don't think it provides isolation from the machine you run it on: it's neither a chroot nor a Docker container. If you want some level of separation to make sure a dev environment is "safe" to run on your machine, without side effects on anything outside the project, then I'm not sure nix-shell can help, but I'd be happy to learn otherwise.
I had been avoiding Nix for a while since I had a bad time with it several years ago, but I recently used it for a small Haskell/LaTeX environment for literate programming and found it worked really well. I'm probably going to invest time into learning it now, as the online docs seem to have gotten better.
It sounds like the improved docs have made a difference already, and that's great to hear. Could you name anything else that you think made your more recent experience so much better than your prior one?
External blog posts have helped; it's easier to Google 'nix-shell for <foo>' and copy working code into your local environment. Honestly I couldn't write hello world in Nix, but I can scrape together bits that work now and that hopefully (per the Nix ethos) will always work.
This. One of the reasons I use docker dev environments is to keep all my sensitive stuff separate from dangerous bugs and malicious external packages/modules in my projects.
I run nix in a vm all the time. The point of this project seems to be to avoid this approach and use local dev natively, which would be a godsend. I am pointing out that there are a bunch of things that are not really supported or ideal with the chosen nix-shell based approach.
Even with old `nix-shell`, you could always use `nix-shell ... --command fish` or whatever it is to launch a different shell. Historically, other `nix-shell` wrappers, like `direnv`'s Nix integration, have also supported non-bash shells well.
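For instance (a minimal sketch; fish is just a stand-in for whatever shell you prefer):

```
# enter the project's environment, but hand the terminal to fish instead of bash
nix-shell --command fish

# or let direnv export the environment into whatever shell you already run:
echo 'use nix' > .envrc && direnv allow
```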
> [I] read somewhere that [nix-shell]'s not really made for this purpose originally, rather for just building packages, bash being only one of the limitations.
Yeah, nix-shell was originally made for debugging Nix builds. The first capability it gained was setting up the build environment of a given package, which equips you with the same compiler, linker configuration, etc., as the package would get if you ran nix-build, or when the package is built on CI/CD or whatever. It even loads the bash functions that the build system invokes so that you can experiment with manually adding steps before and after them and things like that.
But it's gained other capabilities since then, like `nix-shell -p`, whose purpose is more general try-before-you-buy at the CLI, plus magic shebangs. It also has two descendants: `nix shell`, which is just about letting you set up software, and `nix develop`, which is more oriented toward setting up whole development environments and all the env vars associated with them. Anyway, I think that's mostly trivia; it doesn't pose any problems for devbox afaict.
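For the curious, those uses look roughly like this (jq/curl are arbitrary example packages):

```
# the original use case: reproduce a package's build environment
nix-shell '<nixpkgs>' -A hello

# try-before-you-buy: a throwaway shell with jq and curl on PATH
nix-shell -p jq curl

# magic shebang: a script (say ./demo.sh) that provisions its own tools,
# shown here as the file's contents:
#
#   #!/usr/bin/env nix-shell
#   #!nix-shell -i bash -p jq
#   jq --version
```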
> I tried to experiment myself with nix-shell, but I think it doesn't provide separation on the machine on which you run, it's not a chroot nor a docker container
That's true, and that's really the beauty of it: you can set up complex toolchains and use them as if they were simply part of your normal system, but without worrying that they unwittingly depend on parts of your base system that may be unique to you. Likewise, they don't require any permanent changes to your normal, global environment at all. If you've used Python before, you can think of nix-shell like a generalized venv.
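A minimal sketch of that venv-style use, assuming a hypothetical project that wants Node and Python:

```
# a per-project shell.nix; nodejs/python3 are stand-ins for whatever you need
cat > shell.nix <<'EOF'
with import <nixpkgs> {};
mkShell {
  buildInputs = [ nodejs python3 ];
}
EOF

nix-shell   # node and python are now on PATH, but only inside this session
```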
> If you are interested in some level of separation to make sure that a dev environment is "safe" to run on your machine without making side effects to things external to the project
Nix can provide sandboxing for builds of proper packages. So if you want to make sure your environment is complete, promoting your Nix shell development environment into a proper package may help.
But the purpose of shells like this isn't to protect you from running `rm -rf /`, if that's what you're after. It doesn't protect you from dogecoin miners in your `npm install` hooks, if you're just using Nix to provide `nodejs` and then running `npm install` as usual.
What something like this does do is allow you to use all that software without installing it. So if you open up a new session, none of that stuff will be loaded.
Nix can generate container images and VMs for you, though, and that is also one of the things `devbox` can do, if your isolation concerns have more to do with security (disallowing access to your system) than 'purity' (disallowing dependency on your system, not installing things to your system).
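As a rough sketch of the image-building side (the file name and image name are made up):

```
# image.nix: build a Docker image from the same Nix expressions
cat > image.nix <<'EOF'
with import <nixpkgs> {};
dockerTools.buildImage {
  name = "hello-image";
  config.Cmd = [ "${hello}/bin/hello" ];
}
EOF

nix-build image.nix      # ./result is a loadable image tarball
docker load < result
```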
This all makes sense to me, thanks for the comment. I am aware of the --command option, but I didn't manage to do everything I wanted with it; honestly, though, that was a while ago. I was discouraged by people telling me this wasn't the "right way" because tons of things in nix-shell assume bash, but honestly I don't know the details and I should try again.
> But the purpose of shells like this isn't to protect you from running `rm -rf /`, if that's what you're after. It doesn't protect you from dogecoin miners in your `npm install` hooks, if you're just using Nix to provide `nodejs` and then running `npm install` as usual.
This is absolutely fair. I was mostly describing what I wish I could have: isolation (as in, can't write outside of the current directory) together with the ease of getting packages without installing them that nix-shell provides, without the overhead of Docker or a VM. I don't think it's impossible to build, although I appreciate that it may be out of scope for this particular project.
Do you have a more real-world example of a darwin-configuration.nix? I want to see what this looks like, and maybe hear about longer-term experiences with it.
i would just link you mine but this is a pseudonymous account; however:
most nix-darwin users i know have a darwin-configuration.nix that's nearly identical to the one the installer plops down for you, with the exception of more items under `environment.systemPackages`, and using nix-darwin solely as a declarative alternative to nix-env is totally viable
other than a few built-in service configurations and plist defaults, darwin-configuration.nix typically grows much as configuration.nix does on nixos. define or modify a couple packages here and there, shove them into `pkgs` by setting `nixpkgs.overlays`, etc etc. the ux is intentionally very similar to that of nixos
what this means is you can look at a lot of people's nixos config repos and get some idea of what you can do just as well with nix-darwin
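for a concrete flavor, a skeletal sketch (packages and the overlay path are placeholders, not my real config):

```
# nix-darwin reads ~/.nixpkgs/darwin-configuration.nix by default
cat > darwin-configuration.nix <<'EOF'
{ config, pkgs, ... }:
{
  environment.systemPackages = with pkgs; [ git ripgrep tmux ];

  # shove your own packages into pkgs, just like on nixos
  nixpkgs.overlays = [
    (self: super: {
      mytool = super.callPackage ./pkgs/mytool { };
    })
  ];
}
EOF

darwin-rebuild switch   # apply the configuration
```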
i can, however, offer you my anecdote:
nix-darwin has let me completely forget that brew and macports exist, and even let me get away without installing xcode at all -- it's perfectly competent at “getting a suite of dev tools onto a macbook”, but what really sold me on it was just how straightforward adding a new package is:
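a hypothetical example; the project, owner, and hash are invented, but the shape is faithful:

```
mkdir -p pkgs/hxtool
cat > pkgs/hxtool/default.nix <<'EOF'
{ lib, stdenv, fetchFromGitHub, cmake }:

stdenv.mkDerivation rec {
  pname = "hxtool";            # stand-in for the obscure thing nobody packages
  version = "0.3.1";

  src = fetchFromGitHub {
    owner = "someone";
    repo = pname;
    rev = "v${version}";
    sha256 = lib.fakeSha256;   # swap in the real hash after the first fetch fails
  };

  nativeBuildInputs = [ cmake ];

  meta = with lib; {
    description = "An obscure thing that literally nobody packages";
    homepage = "https://example.com/hxtool";
    license = licenses.mit;
    platforms = platforms.unix;
  };
}
EOF
```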
that's an average of 32 lines to go from “obscure thing that literally nobody packages” to “bona fide part of my system”, and that includes meta blocks with homepage/description/etc, because i periodically try and get some of this stuff merged into mainline nixpkgs
the language itself is a little quirky and the evaluation model of the module system is somewhat fraught with fixed-point knot-tying fuckery, but between the process of packaging being so nice and brew/macports pissing me off, i found it easy to drink enough koolaid to get to grips with those aspects
i've been using nix-darwin for almost exactly one year, and i've replaced all my linux installs with nixos, and i have not looked back whatsoever
I can tell you my story: I tried for years to stay in Italy and work there (it's my home country). As a software engineer I had to work 70 hours per week, with no way to get a decent job, for a salary between 20-30k euros (gross). When I severely burned out, I found the strength to leave the country, and I found a completely different work culture in Germany.
All this to say: I'm not sure Italians will ever turn it around. I still sometimes think it would be nice to go back and try to change things, but it's incredibly hard given that most of the young people there either leave, accept shitty jobs, or keep the same mentality as their parents.
Take now... some hard-headed business man, who has no theories, but knows how to make money. Say to him: "Here is a little village; in ten years it will be a great city; in ten years the railroad will have taken the place of the stage coach, the electric light of the candle; it will abound with all the machinery and improvements that so enormously multiply the effective power of labor. Will, in ten years, interest be any higher?" He will tell you, "No!" "Will the wages of common labor be any higher...?" He will tell you, "No, the wages of common labor will not be any higher..." "What, then, will be higher?" "Rent, the value of land. Go, get yourself a piece of ground, and hold possession." And if, under such circumstances, you take his advice, you need do nothing more. You may sit down and smoke your pipe; you may lie around like the lazzaroni of Naples or the leperos of Mexico; you may go up in a balloon or down a hole in the ground; and without doing one stroke of work, without adding one iota of wealth to the community, in ten years you will be rich! In the new city you may have a luxurious mansion, but among its public buildings will be an almshouse.
I would add: "Or get some intellectual property everyone must buy." /s
Oh, I am neither against IP nor in favor of the buy-out of all natural resources. I just think it would be good if each piece of IP had a reasonable price [0] and everyone had land for subsistence.
Disclaimer: this is all AWS related as this is the cloud I'm using. I haven't tried Google Cloud Functions or the Azure equivalent.
I've been working with Lambda a lot more lately and it is not so bad... but also not great.
I'm saying this because I found it hard to have a git-first (GitOps) workflow that works well in AWS: it looks like everything is made to be changed manually. CloudFormation is slow with some resources (if you need CloudFront, it will take tens of minutes), and CodePipeline has a pretty terrible user experience. CodePipeline is cheap and it certainly works, but it's not a good system for pipelines: restarting, terminating steps, and getting the output of steps just don't work in a decent way (I want to see the output in the steps, not jump to CloudWatch). Pretty much every other system outside of AWS is better, but the integration with Lambda and API Gateway is unfortunately not as good. If you know of a better CI/CD system for AWS Lambda outside of CodePipeline, I'd be interested to try it.
In a similar way, most of the serverless frameworks I've tried are written for a workflow executed from the CLI, which is great for getting started and attractive for developers, but not good enough for a company that aims at full reproducibility of setups and "hands-off" operations. Source code changes should trigger changes in the Lambda/API Gateway setup every time, and it would be great if devs didn't have to trigger changes manually.
Those issues aside, I think Lambda is definitely promising, and I can see the company I'm working for right now using it more and more. The developer experience is still lacking IMO, but I'm confident we'll get there at some point.
But does it still make sense to have conferences this big? re:Invent is so big that it's extremely hard to actually attend sessions, and there are logistics problems like sessions being in far-away locations. Small conferences usually seem more useful for actually connecting with people than those kinds of massive events.
It makes sense for the companies involved. If someone is attending primarily just to go to sessions, then no. (Although if you're just going for the sessions, I'm not sure most conferences really make sense.)
Those kinds of articles are not so useful: Mesosphere is the company behind DC/OS and Mesos, and they have all the interest in the world in saying that Mesos is the best. Lines like "... are willing to get your hands dirty integrating your solution with the underlying infrastructure" when talking about Kubernetes are unfair, especially if you compare Kubernetes to Mesos and not to Mesosphere's DC/OS, for which they provide paid services.
It is true that Mesos works on a different level, but, most of all, the two-level scheduling is just a different take on the problem of abstracting physical/virtual resources. In the end, both Mesos/Marathon and Kubernetes aim at the same goal: letting developers stop thinking about servers.
Kubernetes' great advantages are the community (which is unbelievable) and the extensibility it offers: Third Party Resources and Custom Resource Definitions, pluggable webhooks in the API server, and a number of other things that are simply not there in Marathon or any competitor, and which allow companies to make Kubernetes work best for their use cases.
The limitation of only being able to run containers will, I think, be fleeting: as Docker and its alternatives mature, it really won't make sense to use anything else when trying to reach the scale that Kubernetes and Mesos are going for, abstracting out the underlying hardware and providing a framework to run sophisticated apps on undifferentiated hardware.
What's described in that doc is even easier today thanks to Operators (https://coreos.com/operators), which, to quote the description page, are "application-specific controller[s] that extend[s] the Kubernetes API to create, configure and manage instances of complex stateful applications on behalf of a Kubernetes user."
Disclosure: I work on Kubernetes at Google (and wrote the doc you linked to).
Some of us love getting our hands dirty. In my honest opinion, the whole article seemed very neutral in phrasing. It wasn't until I decided to check the source, after having read and enjoyed the entire article, that I discovered it was published on a Mesosphere domain. And even then, I applaud that they were humble enough to save their own product for the end and didn't seem to exaggerate their sales pitch.
True, it is not too much of a sales pitch, but it's still something that can't really be seen as impartial, for natural reasons (it comes from the company behind Mesos), and it loses a bit of accuracy toward the end (Java doesn't equal legacy, and you can run stateful workloads on Kubernetes).
From my point of view, the benefits of two-level scheduling are actually quite limited compared to how the whole story is usually told. Some Mesos frameworks always grab all the resources in the cluster, and it can get tricky to really have multiple frameworks running at the same time on Mesos. Also, sometimes those frameworks don't offer enough additional features to justify changing the way you already use Spark, Cassandra, and so on.
Exactly. The frameworks that the article describes know how to run, say, Cassandra or Spark, but you can do the same thing in Kubernetes using TPRs or CRDs with operators:
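As a hedged sketch (the group and kind names here are invented), registering a custom resource type looks like this; an operator then watches and reconciles those objects:

```
# register a hypothetical CassandraCluster resource type
cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: cassandraclusters.db.example.com
spec:
  group: db.example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: cassandraclusters
    kind: CassandraCluster
EOF

# users then declare instances and let the operator do the heavy lifting
cat <<'EOF' | kubectl apply -f -
apiVersion: db.example.com/v1alpha1
kind: CassandraCluster
metadata:
  name: main
spec:
  nodes: 3
EOF
```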
Well, for something as crucial as this, you want to be able to debug and patch the code. I mean: outside of hobby projects, prototyping, or a startup (i.e., in the rest of the economy, where money wants to buy off risk), this stuff can't be a magic black box. That would be irresponsible. "Why are our servers down? We are losing 500k per day." ... "We're getting some weird error we can't debug because we don't know the codebase, or worse, the language. But we're searching for random dumb stuff to try on Stack Overflow." Yeah, somebody is getting fired.
So for the developers responsible for uptime, it can matter a lot whether they are comfortable with the codebase and language they use.
Of course, if you are more of a consumer-app type of startup, just put all your energy into UX and hope for the best. But let's hope every bank that uses this sort of stuff has a developer on call who can actually dive into the codebase.
Is there a way to do this properly in AWS without nginx? It would also be great to have features to switch only a percentage of the traffic to an app when doing blue/green deployments.
The best way to do blue/green is to put both versions behind a single endpoint. So, if your endpoint is 'app: node-app', then you put the label 'app: node-app' on BOTH your existing version and your future version, and target all traffic at a service with the selector 'app: node-app'.
Then, you slowly start to spin up your new instances from 1 -> 10 -> 100 (or whatever). The traffic will split automatically because both apps have the same label/selector, and you control the amount by how many instances of each you have.
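Concretely, something like this (deployment names and ports are placeholders):

```
# one stable Service selects on the shared label, so it fronts both versions
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: node-app
spec:
  selector:
    app: node-app        # matches pods from BOTH deployments
  ports:
  - port: 80
    targetPort: 8080
EOF

# both deployments label their pods app: node-app; shift traffic by replica count
kubectl scale deployment node-app-green --replicas=1    # new version gets ~1%
kubectl scale deployment node-app-blue --replicas=99    # old version keeps ~99%
```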
Sort of a blend. It will send a SIGTERM to your process first, which should be your signal to start draining and to exit once that's done. If you don't finish within a configurable timeout, then SIGKILL is sent.
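The timeout lives on the pod spec; a minimal sketch:

```
# the SIGTERM -> grace period -> SIGKILL window is configurable per pod
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: drain-demo
spec:
  terminationGracePeriodSeconds: 60   # default is 30s; SIGKILL arrives after this
  containers:
  - name: app
    image: nginx   # stand-in for a server that drains connections on SIGTERM
EOF
```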
I was wondering how to automate this through some sort of pipeline that needs a human to click "go on with the next X% of the rollout", and how I would do it with kubectl without too much pain.
We've been doing this with multiple deployments (e.g. 1, half, all) and updating the deployments sequentially when the previous one looked good. (these are all fronted by one service).
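A sketch of what that looks like with kubectl (deployment and image names are hypothetical):

```
# three deployments behind one service, updated in stages
kubectl set image deployment/app-canary app=myorg/app:v2   # stage 1: one replica
# verify metrics/logs, then:
kubectl set image deployment/app-half app=myorg/app:v2     # stage 2: half the fleet
# verify again, then:
kubectl set image deployment/app-rest app=myorg/app:v2     # stage 3: everything else
```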
Take a look at Traefik (http://traefik.io/): it's a reverse proxy you use as an edge service behind the cloud provider's L4/L7 LB. It's designed to change dynamically, it can listen to K8s Ingress changes and reconfigure itself automatically, and it has Let's Encrypt support (although at the moment that's not so streamlined in K8s, but that's supposed to change soon).
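A minimal sketch of the K8s side, assuming a hypothetical app Service (the annotation tells Traefik, rather than the cloud LB controller, to claim the Ingress):

```
cat <<'EOF' | kubectl apply -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: node-app
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: app.example.com     # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: node-app
          servicePort: 80
EOF
```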
Great stuff, happy to see this, especially the part that concerns cluster setup, but it's still very early stage. Currently I'm learning how to get started on AWS, and it's still a bit too painful... after using GKE you just don't want to deal with manual setup.
Also don't forget about minikube, which lets you play around with a Kubernetes cluster locally. You should really be working with this before even attempting to set up a cluster in the cloud.
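Getting started is about two commands:

```
# spin up a throwaway local cluster; minikube also points kubectl at it
minikube start
kubectl get nodes   # one node named "minikube" should report Ready
```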
You should check out kops. It seems to me like it's one level of abstraction above kubeadm, and it makes creating clusters on AWS _ridiculously_ easy (one command).
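A hedged sketch of that one command (cluster name, zone, and state bucket are placeholders):

```
# kops keeps its state in S3 and, at least historically, wants a Route 53
# hosted zone matching the cluster name
export KOPS_STATE_STORE=s3://my-kops-state
kops create cluster --name=k8s.example.com --zones=us-east-1a --yes
```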
I'd like to try out kops; it sounded good. But step one was setting up a zone on Route 53, and we don't use Route 53. kube-aws from the CoreOS folks makes the same assumption, but is still usable; however, it doesn't have multi-AZ capability.
I love kubernetes and I can't wait for the tooling to mature. I'm about to give 1.4 a spin and see where things stand.
That's only if you use VPC networking. If you use external networking, you can deploy a networking DaemonSet (Weave, Flannel, etc.) and there's no 50-node limit.
I hope the future of the cloud will really be managed OSS as a service. Google is doing a great job with Kubernetes and GKE, and I hope the other providers come to understand that. Microsoft is on the right track with DC/OS as a service; Amazon is just not there yet.