Docker for Mac and Windows Beta (docker.com)
904 points by ah3rz on March 24, 2016 | 239 comments



The last time I used xhyve, it kernel panic'ed my Mac. Researching this on the xhyve GitHub issue tracker [1] showed it was due to a conflict with VirtualBox: if you've started a virtual machine with VirtualBox since your last reboot, subsequent starts of xhyve panic.

So, buyer beware, especially if said buyer also uses tools like Vagrant.

[1] https://github.com/mist64/xhyve/issues/5

I've said before that I think the Docker devs have been iterating too fast, favoring features over stability. This development doesn't ease my mind on that point.

EDIT: I'd appreciate feedback on downvotes. Has the issue been addressed, but not reflected in the tickets? Has Docker made changes to xhyve to address the kernel panics?


Thanks, this is useful feedback. There are various workarounds in the app to prevent such things, but the purpose of the beta program is to ensure that we catch all the weird permutations that happen when using hardware virt (e.g. the Android emulator).

If anyone sees any host panics ever, we'd like to know about it (beta-feedback@docker.com) and fix it in Docker for Mac and Windows. Fixes range from hypervisor patches to simply doing launch-time detection of CPU state and refusing to run if a dangerous system condition exists.


It's good to see that you folks are taking ownership of such a critical portion of this infrastructure. I hope you understand why people can get worried when Docker has a history of integrating with third party software, and responding "Not Our Problem" when problems arise.


Who downvoted this? This is a real experience report, expressing valid concerns, citing an issue tracker for more information.

Is this type of comment discouraged on HN? If so, why?


I didn't downvote, but I'd imagine people don't agree with his criticism of xhyve's stability because the panics stem from a conflict with VirtualBox. VirtualBox has to invasively modify your system configuration in order to accomplish virtualization. On the other hand, xhyve uses an OS X-sanctioned virtualization technique (Hypervisor.framework) that works within sandboxed apps. This is the route Apple advocates for virtualization going forward, not the method VirtualBox uses.


> people don't agree with his criticism of stability because it conflicts with VirtualBox

Folks are welcome to disagree, but Docker has a history of shipping software which uses a 3rd party feature which breaks, to which they frequently responded "not our code, talk to someone else": btrfs instability, corrupted volumes due to conflicting devmapper libraries, iptables dropping routes, upgrades orphaning containers, etc.

I realize they don't have control over all of the variables, but constantly releasing unstable 3rd party features was not the greatest behavior, and the "Not My Problem" response to issues is aggravating.

All that said, since they're working against their own fork of xhyve, it is a sign that these kinds of issues will be addressed by the Docker team this time, which is a good thing.


This is exactly correct. We're really enjoying working with the Hypervisor.framework, VMnet.framework, and all the various hooks Apple has exposed for apps like Docker for Mac. There are some bugs in the short-term, but Apple has been steadily addressing our Radar bugs and we have workarounds in place in the Application for the most annoying ones.


I don't think that makes his question any less valid. Yosemite has been out 18 months, true, but hypervisor.framework has had a lot of work done since then; even Docker acknowledges that they've been filing Radar bugs against it to this day.

Yes, any new virtualization project should strongly consider using it, but that doesn't mean projects that have a massive investment in tooling and ecosystem should be considered deprecated and dead just because of this.


I'd bet it has to do with criticizing the Docker developers for a bug which is actually in one of two independent projects, and doing so in reaction to an announcement of something explicitly billed as a way to avoid that problem.

I doubt it would be getting downvotes if the comment was just a statement of fact without the somewhat random slam against the Docker developers.


Yes, I think this issue has been addressed for a while – it was solved in a release of Virtualbox. I'm sure that 5.0+ doesn't have the conflict with xhyve.


vbox4 caused issues with xhyve when xhyve first came out (not sure if it's still an issue), vbox5 coexists just fine.


I've had similar experiences as well as times where xhyve made it so my laptop could not come out of sleep.

xhyve is wonderful but still needs some work and it seems like the main dev isn't interested in continuing work on it at this point[1]. Hopefully Docker's usage will spur more work on xhyve.

[1] Last commit on xhyve is December 28th, 2015 https://github.com/mist64/xhyve/commits/master


We've had to fork xhyve very heavily for Docker to embed it, and are making many changes in a very rapid loop as we improve d4mac. We're still debating whether to contribute back into xhyve or just make it a separate open-source project with its own pace and design tradeoffs. Either way we are impatient to open-source it.


I'd hope some of your improvements make it back into xhyve.


What's a little mind-boggling to me (I have some, not the best, but some experience with the Docker devs) is that the mantra for a long time has been "We're not going to add features to Docker if that feature could wrap Docker instead."

Makes sense, that's fine. But it seems they've gone and done (and continue to do) that very thing!


I can easily get a kernel panic on my Mac using docker-machine with the VirtualBox driver. It happens easily if you're not on the latest VirtualBox, and possibly even on the latest version. It happens to me almost half the time.


I think this was related to a kernel bug which was fixed in the latest OS X release (shipped yesterday).

I personally discovered this bug while using dlite, which uses xhyve behind the scenes.


I too am seeing it as fixed now.


Updating to a newer version of VirtualBox solved that issue.


If I had a yearly quota on HN for upvotes, I'd use all of them on this.

> Volume mounting for your code and data: volume data access works correctly, including file change notifications (on Mac inotify now works seamlessly inside containers for volume mounted directories). This enables edit/test cycles for “in container” development.

This (filesystem notifications) was one of the major drawbacks of using Docker on Mac for development, and a long-time prayer to the development gods before sleep. I managed to get it working with Dinghy (https://github.com/codekitchen/dinghy) but it still felt like a hack.
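
Roughly, the workflow this enables, as a sketch (the image tag and npm script are placeholders):

    # bind-mount the project into the container; with working inotify,
    # the watcher inside the container picks up edits made on the Mac
    docker run --rm -v "$PWD":/src -w /src node:4 npm run watch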


We'd love to get your feedback on the new filesystem engine in the Docker for Mac app. It's been a ton of work to get right, and there are a few corner cases in the current beta that we're squashing, but overall things "just work" for my day-to-day Linux development on my Mac using the current beta.

At this stage, pointing it to the weirdest and most wonderful filesystem stressers you can find is welcome. We'll leap on any issues you find...


One thing I'm immediately concerned about is having some way of "pausing" xhyve. Purely because of Android development :(

Intel's HAXM doesn't (seem to?) play nice, and asks for an exclusive lock. See https://github.com/mist64/xhyve/issues/88 and https://code.google.com/p/android/issues/detail?id=197915


You can quit and restart (it's very quick). There isn't a pause at present though.


That's probably good enough


I haven't tried the Android emulator recently, but I am interested in deploying Facebook's Infer tool on our codebase (and they've got a Docker container too for it of course). So I've filed an internal bug for us to look into HAXM and figure out if it plays well with Docker for Mac/Windows. Thanks for the pointer!


What is the filesystem engine? Is it just something fancy on top of NFS, or something completely new?


It's something new.


Is it related to the work started by Brad Fitzpatrick a few months ago with the goal of implementing client-server gateway between the host and the guest filesystem using FUSE?


The new daemon (dubbed osxfs) is FUSE-based at the moment, but also provides a semantic translation layer between OSX filesystem calls and Linux kernel events. The FUSE layer can be removed in the future in favour of a direct kernel module with this architecture, if it ends up being a bottleneck (it's fine right now though).


Do you use a custom protocol between the daemon running in OSX and the one running in the container?

Is there some kind of caching? If yes, what is the impact on tools like make when checking atime or mtime? If no, is there a perceptible impact on latency, for example when compiling a large project in the container?


I gotta say I've had about all the VirtualBox I can take in this lifetime. It's caused me pretty dire file handling problems on 3 projects and only 2 of those had any Docker in them. Thanks for working on this.

I have an open bug against docker-compose (docker doesn't do the same thing by itself) where the wrong layers are being used, but only on virtualbox.

Hopefully this will solve that problem, as well as how to make my dev and prod database handling more homogenous. And I can finally turn sendfile back on in my nginx configs without having to special case anything for dev!


Is this using the normal VirtualBox "shared folders" functionality? For Vagrant we had to drop VirtualBox in favour of VMware Fusion because VirtualBox suffered cache corruption almost every day. You would write a file on the host, and the file would be corrupt inside the VM. Last I checked, this bug was still open, though I'm not certain (on my phone right now); it still makes me wary of using VirtualBox again. Have you dealt with this issue at all?

Edit: Or is this not using any VirtualBox code at all?


No, it does not use any VirtualBox code.


I've been running Docker on Hyper-V since the start (VBox never worked for me as I had a requirement on Hyper-V) - can you provide a little more info on how unikernel experience was used to make a difference? Is this a custom compiled linux kernel with Alpine distro on top? How would volume mounts compare to something like using the netshare (cifs) with a samba share from the windows host? (which is what I'm currently using)?


I really appreciate you guys working on that. I have since moved to a debian VM, but might eventually move back if I don't need to frequently restart docker machine hosts.


Will do, I've signed up for the beta (same username as here)


What is the performance of this new filesystem engine, compared to VirtualBox shared folders, which are well-known to be slow?


This is huge! We had a lot of trouble getting fs notifications working in containers, especially for hot-reloading of code.

Our solution was to create a custom fs watcher that would look for changes in the content of the files (it's only code, and for development, so speed doesn't matter much). I have been looking to replace this with something cleaner (at least with something like Dinghy and actual filesystem events), but Docker for Mac is likely the way we'll go.


Anywhere I see Docker on Mac mentioned, I always point people to Dinghy. It has solved a lot of the issues the vanilla solutions had (NFS, fsevents etc) and the maintainer is incredibly responsive.

I would love to see Docker come up with something that rivals Dinghy as I would prefer not to have to use a third party tool, but considering their reputation for stability and the comments being made about xhyve, I am happy to continue using (and gushing about) Dinghy for the foreseeable future.


You should check out dlite (an alternative to Dinghy), which has been around for a few months: https://github.com/nlf/dlite

It's been much more friendly (read: plug and play) than Dinghy.


As mentioned in another comment here [1], we've been in touch with the author of dlite during this beta. His feedback was very useful!

[1] https://news.ycombinator.com/item?id=11352621


Does it mean no `VBox shares` mmap bug limiting the use of volumes in Cassandra and MongoDB containers?


Yes that should work fine with osxfs (the new filesystem engine). Do you have a pointer to the specific bug in vbox so we can add it to our test suite?


Here [1], under "WARNING (Windows & OS X)", there are several links to bug reports on the VBox and Mongo trackers.

[1] https://hub.docker.com/_/mongo


Here is the catch: you just need to realise that docker-machine is really just a VirtualBox Linux machine. Once you get that, having shared volumes work as expected is quite easy without requiring 3rd party tools. I'm actually quite concerned about how this is going to change with these new tools; if they introduce "more magic" it could mean losing control of your local environments.


In my comment I wanted to express the same thing: the whole Boot2Docker solution seems too hacky for devs who aren't Docker enthusiasts. I had a hard time trying to convince co-workers to use Docker for development environments, mainly because of the filesystem and the user experience. You have to really like Docker to go through all the setup. I did it, but I can see why my coworkers didn't.


Agreed. We gave up on it and created a local beefy server that we shared for docker testing, but then we have port collisions and the other problems that come along with sharing.

Native is better and I'm very excited for this.


Can someone explain in simple terms how Docker for Windows is different from Application Virtualization products like VMware ThinApp, Microsoft App-V, Spoon, Cameyo, etc? Also, why does it require Hyper-V activated in Windows 10? I found this: https://docs.docker.com/machine/overview/ but I don't understand if you need separate VMs for separate configurations or they have a containerization technology where you are able to run isolated applications on the same computer.


Thanks for exposing me to ThinApp and the rest. I took a quick look; these are Windows-based technologies designed to run Windows apps, but conceptually I don't see much difference.

Docker is a containerization standard that relies on various Linux capabilities to isolate application runtimes (or containers if you will). On Mac and Windows it used to be achieved by running a small Linux VM in VirtualBox, but it looks like this release has brought in xhyve (on the Mac side), which is supposed to have an even smaller footprint.

HTH.


ThinApp is really about packaging existing Windows desktop apps with the appropriate OS bits that the application needs. Then you can run that "thinapped app" on a different version of Windows or from a USB drive. Considered app level virtualization. #1 use case was to run IE6 based apps on newer Windows OS.


When I search for xhyve, it returns a project for OS X; it would be interesting to know the specific Windows technologies as well.


As mentioned in the linked blog post, they're using Hyper-V as their hypervisor on Windows (it's a direct counterpart to xhyve on OSX, except entirely built into the OS).

The Docker for Mac and Windows beta does not use VirtualBox on either platform.


> it's a direct counterpart to xhyve on OSX, except entirely built into the OS

xhyve on Mac OS X is a very thin wrapper around Hypervisor.framework which itself is "entirely built into the OS".


My original question was if you need to run a separate VM for every Docker configuration or you can run different application versions at the same time. For example, running Outlook 2010, 2013, and 2016 side by side.


You can run multiple containers on a single host (VM).


Docker for Windows relies on Hyper-V


Docker uses LXC containers. In Linux, these aren't VMs; they're lightweight user-land separations that use things like cgroups and lots of really special kernel modules for security.

Unfortunately, this means Docker only runs on Linux .. not even Linux...special Docker Kernel Linux (all the features they need are in the stock Kernel tree, but it's still a lot of modules). In Windows/Mac, you still need to run in a virtual machine.

Even with this update...you still need to run in a virtual machine. It's not actually running Docker natively. It can't, even on Mac, which has a (not really) *NIX-ish base. You have to then use the docker0 network interface to connect to all your docker containers.

In Linux, you can just go to localhost. I _think_ FreeBSD has native Docker support with some custom kernel modules. I'm not sure...I've only looked at the Readme. I haven't tried it.

So even in Windows/Mac, all your containers do run in one VM (whereas with the traditional stuff you mentioned, you'd need a VM for each thing). Docker containers are meant to handle one application (that it runs as root within its container as the init process ... cause wtf?). With VMs, you'd typically want some type of configuration management (Puppet, Ansible, Chef, etc.) that sets up apps on each VM/server. With Docker, each app should be its own container and you link the containers together using things like Docker Compose or running them on CoreOS or Mesos.

In my work with Docker, I'm not sure how I feel. LXC containers have had a lot of security issues. Right now, Docker doesn't have any glaring security holes and LXC has increased security quite a bit. CoreOS is pretty neat and I wouldn't use docker in production without it or another container manager (the docker command by itself still cannot prune unused images. After a while you get a shit ton of images that just waste space you're not using. CoreOS prunes these at regular intervals. A docker command to do this is still a Github issue. Writing one yourself with docker-py is horribly difficult because of image dependencies).

Oh and images. Docker uses images to build things up like building blocks. That's a whole thing I don't want to go into, but look it up. It's actually kind of interesting and allows for base image updates to fix security issues (although you still need to rebuild your containers against the new images ... I think...I haven't looked into that yet).

Docker is ... interesting. I find it lazy in some ways. I think it's better to build packages (rpms, debs). FPM makes this really easy now. Combine packages with a configuration management solution (haha..yea they all suck. Puppet, Ansible, CFEngine...they're different levels of horrible. Ansible so far has pissed me off the least) and you can have a pretty solid deployment system. In this sense, Docker does kinda make more sense than handling packages. You throw your containers on CoreOS/Mesos and use Consul for environment variables and you can have a pretty smooth system.

I dunno. I'm trying to actually like Docker. I've only made fun of it in the past, but now I work for a shop that uses it in production. O_o

:-P


There are no custom kernel modules; everything is in a stock kernel since 3.10 (which not-so-coincidentally is the minimum supported kernel version).

Containers are run with whatever user you tell them to run as; the default is root because that's the only guarantee.

LXC is also something different. LXC is a set of userland tooling to interact with cgroups and namespaces (which docker used to exec out to). LXC != Linux containers (and indeed there isn't really such a thing as "a container" like there is a zone or a jail on Solaris and BSD respectively; it's made up). Also, again: no custom kernel modules on BSD.


Docker on Windows (with native Windows containers) is also a very real thing and will ship with the next Windows Server release (you can download a technical preview from Microsoft now).


The glaring security hole in Docker is that it has not designed a solution for keeping secret data necessary to build an image from being in the image at run time.

They also haven't solved the general case of keeping transient build data out of the final image either, but that's a broader problem that doesn't necessarily involve security concerns.

For now not a lot of people are concerned about either problem so it's not getting the attention it deserves. But they've been steadily peppered with inquiries about these issues for a year or two now and they still don't have an answer, which is concerning. I believe this is one of the reasons the CoreOS guys wandered off to do their own thing.

Fortunately for us and unfortunately for them, they have the design aesthetics of the Marquis de Sade, and until they start giving even half a thought to ergonomics, Docker is perfectly safe.


They have build args for this now. Thus, you'd do something like:

    docker build --build-arg OAUTH_TOKEN=blah -t example .
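
And on the Dockerfile side, roughly (the token name and repo URL are made up for illustration):

    FROM alpine:3.3
    # build-time only: supplied via --build-arg and not stored as an ENV
    # in the final image, though the expanded RUN line can still surface
    # in `docker history`, so short-lived tokens are safer
    ARG OAUTH_TOKEN
    RUN apk add --no-cache git && \
        git clone https://${OAUTH_TOKEN}:x-oauth-basic@github.com/example/private.git /src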


I think you just proved my point. We're all of us running around with our pants down because we think Docker is taking care of this stuff but it's merely a bunch of features that look like they should be fit for that purpose but aren't.

And this is why I am stuck with a separate build and package phase, because I have to have that separation between the data available at build time and what ends up shipped, but even there I'm pretty sure I'm making mistakes, due to some of the design decisions Docker made thinking they were helping but actually made things worse.

For instance, there's no really solid mechanism for guaranteeing that none of your secret files end up in your docker image, because they decided that symlinks were forbidden. So I have to maintain a .dockerignore file and I can never really be sure from one build to the next that I haven't screwed it up somehow. Which I will, sooner or later.

I'm always one bad merge away from having to revoke my signing keys. It's a backlash waiting to happen.


I don't see how that's a compelling argument at all.

All that's keeping you from committing your credentials is a .gitignore file. They have the file, it works reliably, don't worry about it.


You should know there was a pretty big bug fixed in .dockerignore in just the last release. [edit] That bug was in the logic for white-listing files, which is generally the safest way to keep from accidentally publishing things (that is, if it works).

And it's possible a similar issue still exists in docker-compose; that one is still open.

.gitignore keeps me from checking my files into git, but it doesn't keep me from publishing them in a docker image. So now I have a second way to screw up.


Can you link to this bug? I thought .dockerignore specifically didn't allow whitelisting and only allowed for blacklisting files that weren't to be included.

Are you saying that docker would include files that should have been excluded by .dockerignore? I'd be interested to learn more. Thanks in advance.


You could probably whitelist with a .dockerignore like

    *
    !README.md
    !run.sh

(that is: exclude everything, then include the README and the initiation script; note that trailing comments would be treated as part of the pattern, so they can't go on the same line). You would want to check exactly what the globbing rules are for the .dockerignore file, though. I don't know whether '*' will catch .dotfiles, for instance.

  https://docs.docker.com/engine/reference/builder/#dockerignore-file
  https://golang.org/pkg/path/filepath/#Match


That's it, thanks for following up in my absence.

There are a couple of frameworks where all of the production files end up in, for instance /dist and one other directory. Rather than having to constantly blacklist everything you just say "ignore everything except X and Y"


I'm sorry, things got hectic and I bailed on the discussion. I thought I had a handy link to the bug I was thinking of, but I couldn't find a back-link from the issue I'm watching to the one in docker/docker.

I think but am not 100% certain this is the issue I was thinking of, but it seems the most likely, and it was just fixed in 1.10: https://github.com/docker/docker/issues/17911

Some day I'm sure .dockerignore will be solid, but my confidence level isn't high enough yet (it's getting there) to put my trust in it.

My point was that there are other ways that directory structures and what is visible to COPY could have played out where vigilance is less of a problem. It's usually immediately obvious if a file you actually needed is missing from a build, but far less obvious whether a file that you categorically did NOT want to be there is actually absent.

Because the system runs in one of those scenarios and dies conspicuously in the other.


From the horse's mouth:

  The build-time environment variables were not designed to handle secrets. 
  By lack of other options, people are planning to use them for this. 
  To prevent giving the impression that they are suitable for secrets, 
  it's been decided to deliberately not encrypt those variables in the process.


How would they "encrypt" them that wouldn't be trivial to undo?

I think people aren't concerned about it because it doesn't make sense to try to put secrets into container images. Whatever you're using to deploy your Docker containers should make those secrets available to the appropriate instances at runtime. This is how Kubernetes handles secrets and provides them.

http://kubernetes.io/docs/user-guide/secrets/

(For example, what if you have two instances of a service and they need to have different SSL certs? Are you going to maintain two different containers that have different certs? Or would you have a generic container and mount the appropriate SSL cert as a volume at runtime?)
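
With plain Docker, that runtime-injection pattern looks roughly like this (the host path and image name are made up):

    # mount the instance-specific cert read-only at run time instead of
    # baking it into a per-instance image
    docker run -d \
      -v /etc/ssl/service-a.pem:/run/secrets/cert.pem:ro \
      example/generic-service --cert /run/secrets/cert.pem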


I've actually read that. For context, it's a comment made before the feature was complete. Said feature, according to the manual, doesn't persist the value, and thus is probably suitable for passing a build-time secret.

From my testing though, as long as you set the build-arg and consume it directly, it doesn't seem to persist. That said, it's super easy to fuck that up if the tool you consume it with then goes on to save the secret somewhere.

Thus it's no doubt best to use expiring tokens or keep your build separate. Also, don't use it to seed a runtime secret; that would force you to treat the image itself as a secret.


I linked to that because it cross references to the PR where the build-args feature was added. If they're out of sync that's 1) news to me and 2) confusing and should be fixed.

I think one of the things we're seeing is that Docker is opinionated, a number of powerful dev tools and frameworks are also opinionated, and us poor developers are stuck between a rock and a hard place when those opinions differ.

For instance I'm still not clear how you'd use the docker-compose 'scale' argument with nginx. Nginx needs to know what its upstreams are, and there's IIRC still an open issue about docker-compose renumbering links for no good reason, and some Docker employee offering up how that's a feature not a bug. I could punch him.

Single use auth tokens and temporary keys sure would fix quite a few things, to be certain, but those opinions keep coming in and messing up good plans :/


I'm not sure we should really be having a go at them for what's in their git discussions versus what's in their documentation. I'd presume the documentation is canonical; I'd rather they weren't muting their discussions to remain consistent.

That said, as I said previously, --build-args are dangerous: it's trivially easy to store and then publish a secret, so it makes sense they weren't jumping for joy about implementing it. I'd say it is needed though, thus it's now a thing.


The two most recent technical previews for Windows Server support containers natively. You don't need a VM to run containers on Windows.


It supports Windows containers. You still need a Linux VM to run Linux containers.


Docker does not use LXC. It's a separate project. LXC is similar, but has gone in a different direction.


> Docker uses LXC containers.

Nope, we've been using our own implementation of a container runtime for 2 years (libcontainer). LXC is not supported anymore and it was always a hacky execdriver.

> In Linux, these aren't VMs and are light weight user-land separations that use things like cgroups and lots of really special kernel modules for security.

They're kernel-space separations, since the kernel understands namespaces (though it doesn't understand the concept of a container, and some things aren't namespaced).

> Unfortunately, this means Docker only runs on Linux .. not even Linux...special Docker Kernel Linux (all the features they need are in the stock Kernel tree, but it's still a lot of modules).

Almost all modern distros have support for all of the modules required to run Docker.

> In Linux, you can just go to localhost. I _think_ FreeBSD has native Docker support with some custom kernel modules. I'm not sure...I've only looked at the Readme. I haven't tried it.

FreeBSD is not supported as a daemon.

> So even in Windows/Mac, all your containers do run in one VM (where as with traditional stuff you mentioned, you'd need a VM for each thing).

Actually, recent versions of Docker can run as a daemon on Windows using some proprietary features I don't care about.

> Docker containers are meant to handle one application (that it runs as root within its container as the init process ... cause wtf?).

All machines have a single process running as root as the init. You can run a proper init inside your container (in fact it's recommended), and run many processes inside the same container. It's discouraged for scalability reasons to stuff your database and front-end in the same container because then it's hard to spin up more than one front-end connected to the same backend.

> In my work with Docker, I'm not sure how I feel. LXC containers have had a lot of security issues. Right now, Docker doesn't have any blaring security holes and LXC has increased security quite a bit.

Again: Docker doesn't use LXC and hasn't for quite a while. In addition, Docker has default SELinux, seccomp and AppArmor profiles that increase security (seccomp allows us to disable syscalls that aren't namespaced). There is a concern on the kernel side that they don't appear to care about going the Zones or Jails route: actually making the kernel aware of containers so that it can properly namespace things.

> After a while you get a shit ton of images that just waste space you're not using. CoreOS prunes these at regular intervals. A docker command to do this is still a Github issue. Writing one yourself with docker-py is horribly difficult because of image dependencies).

*ahem*

    docker images | awk '/^<none>/ { print $3 }' | xargs docker rmi

Sure, it's not a single command but it isn't impossible to do and doesn't require docker-py. Besides, you should be using engine-api.
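
An equivalent sketch using the built-in dangling filter, which avoids scraping column output (assuming a Docker version recent enough to support --filter):

    docker rmi $(docker images -q --filter dangling=true)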

> Oh and images. Docker uses images to build things up like building blocks. That's a whole thing I don't want to go into, but look it up. It's actually kind of interesting and allows for base image updates to fix security issues (although you still need to rebuild your containers against the new images ... I think...I haven't looked into that yet).

There's also tools like zypper-docker to allow for hot-patching of images.


docker on windows requires a linux vm running on top of virtualbox.


A Linux VM, yes, but not VirtualBox! Docker for Windows is built on top of Hyper-V.


This is an amazing announcement, but... The beta requires a NDA. The source code is also not available. This gives the impression that this will be a closed commercial product and that really takes the wind out of my sails.


The NDA is just residue from the alpha testing phase; we'll remove it.

We will open-source all the components individually, to make them easier to reuse elsewhere. That requires work to do properly.

Lastly, Docker for Mac and Docker for Windows will be free.


NDA is removed.


From the blog post: "Many of the OS-level integration innovations will be open sourced to the Docker community when these products are made generally available later this year."


Yes, I'll be excited when the lower bits will be open sourced, as that is what I care most to see (I don't even run Mac/Windows). But in an era when even Microsoft is open sourcing and giving developer tools away for free, the idea that this won't be an open source product is odd.


We want to open-source the system components separately, so that you can use them separately from docker if you wish. That is different (and more useful) than just dropping a big xcode project on github and letting you dig for the gems. We want to do it right which takes extra work.


But not all? Any comment from the Docker folks on here? I thought docker was open source.


Yes, all the components will be open-sourced.


Why? Great products are worth paying for.


Nothing about the freedom of the software has to do with whether or not you compensate the authors for creating it.

If you take free software and never consider paying its developer for making it, despite them providing you freedom, choice, and a degree of trust in the software you can not have with proprietary code, then you are the kind of person to blame for why proprietary software is so rampant today.

For example, I donate $200 to the Document Foundation every year to match the cost of an annual subscription to Office 365 plus a 33% bonus for respecting my freedom.


As a practical matter, few people pay for open-source software. In theory, the two issues are orthogonal, but in reality, they are not.


ahem SUSE and RedHat are examples of companies that sell free software.


Expecting open source doesn't mean I wouldn't pay for it. I co-founded a company that is fully open source, so I do understand that money needs to exchange hands to keep this industry flowing. I just believe in freely sharing ideas.


> freely sharing ideas

You mean freely sharing execution? The source is not the idea.


There are no plans to charge for it though.


We have been working with hypervisor.framework for more than 6 months now, since it came out, to develop our native virtualization for OS X, http://www.veertu.com. As a result, we are able to distribute Veertu through the App Store. It's the engine for "Fast" virtualization on OS X, and we see now that Docker is using it for containers. We wish that Apple would speed up the process of adding new APIs to hypervisor.framework to support things like bridged networking and USB, so everything can be done in a sandboxed fashion without having to develop kernel drivers. I am sure the Docker folks have built their kernel drivers on top of the xhyve framework.


If you're using docker on mac, you're probably not using it there for easy scaling (which was the reason docker was created back then), but for the "it just works" feeling when using your development environment. But docker introduces far too much incidental complexity compared to simply using a good package manager. A good package manager can deliver the same "it just works" feeling of docker while being far more lightweight.

I wrote a blog post about this topic a few months ago; check it out if you're interested in a simpler way of building development environments: https://www.mpscholten.de/docker/2016/01/27/you-are-most-lik...


The point of using Docker on Mac is to ensure that your local dev environment is the same as your deploy environment, and avoid that ever-delightful "well it works on my machine, not sure what's breaking on the build server" experience.


With respect, I'm not sure you really grok the value prop of docker as it relates to operations and delivery. (The "easy scaling" part is incidental to that, and not the primary purpose of Docker). Nix, while cool, does not address those issues.


Nix is more difficult. With Docker, you don't need to learn much new stuff, since images are created from sequences of ordinary shell commands running on familiar distributions. Nix requires you to familiarize yourself with a new package manager and its somewhat arcane definition language. And then also the specific Nix tools for working with npm, ghc, or whatever you want to use. So the experience is very different.


In this case you could still pick Homebrew as your package manager of choice. It works pretty well, is available cross-platform (https://github.com/Linuxbrew/linuxbrew) and is far simpler than Docker. You just have to write a short shell script which installs all the dependencies via brew when they're not installed already (much like you'd do in a Dockerfile).


Is Docker that difficult? The Dockerfile is a nice single file that defines all container dependencies and then it's three commands (build, create, and start) to be able to easily work in the exact same environment for dev, qa, and production regardless of whether the host is running OSX or a flavour of linux.


Brew doesn't offer you isolation - you cannot run multiple isolated applications with different dependencies at the same time.


But you can install brew in a subdir. For some projects I do exactly that: project/.brew. I have a project/bin/activate which puts ./.brew/bin in your path. And there you go: Postgres 9.5 for this project and Postgres 9.2 for that project. Still no isolation though. And having pg installed in a docker image is not a good example... but you get the point :)
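
A sketch of that layout, assuming Homebrew's ability to live in any directory (repo URL as of early 2016):

    # one-time: clone a project-local Homebrew into project/.brew
    git clone https://github.com/Homebrew/homebrew.git .brew
    ./.brew/bin/brew install postgresql
    # project/bin/activate: put the local brew first on PATH
    export PATH="$PWD/.brew/bin:$PATH"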


How does this solve the dev-prod parity issue (using the same versions of dependencies, etc.)?


No, package managers don't really have anything to do with this.


Nix can provision containers, VMs, bare metal. It is much more capable than Docker because it composes, and doesn't use opaque disk images as the basis for everything. Nix provides much better reproducibility.


Ditto with Guix. Reproducible builds FTW.


I'd like to have an equivalent of `guix environment` in Brew.


Tell that to guix container.


> Faster and more reliable: no more VirtualBox!

I'm a Docker n00b, still don't know what it can do exactly. Can Docker replace VirtualBox? I guess only for Linux apps, and I suppose it won't provide a GUI or run Windows to use Photoshop?!


Let me explain Docker for Mac in a little more detail [I work on this project at Docker].

Previously in order to run Linux containers on a Mac, you needed to install VirtualBox and have an embedded Linux virtual machine that would run the Docker containers from the Mac CLI. There would be a network endpoint on your Mac that pointed at the Linux VM, and the two worlds are quite separate.

Docker for Mac is a native MacOS X application that embeds a hypervisor (based on xhyve), a Linux distribution and filesystem and network sharing that is much more Mac native. You just drag-and-drop the Mac application to /Applications, run it, and the Docker CLI just works. The filesystem sharing maps OSX volumes seamlessly into the Linux container and remaps MacOS X UIDs into Linux ones (no more permissions problems), and the networking publishes ports to either `docker.local` or `localhost` depending on the configuration.

A lot of this only became possible in recent versions of OSX thanks to the bundled Hypervisor.framework, and the hard work of mist64, who released xhyve (in turn based on bhyve from FreeBSD) to use it. Most of the processes do not need root access and run as the user. We've also used some unikernel libraries from MirageOS to provide the filesystem and networking "semantic translation" layers between OSX and Linux. Inside the application is also the latest, greatest Docker engine, which auto-updates to make it easy to stay up to date.

Although the app only runs Linux containers at present, the Docker engine is gaining support for non-Linux containers, so expect to see updates in this space. This first beta release aims to make the use of Linux containers as happy as possible on Windows and MacOS X, so please report any bugs or feedback to us so we can sort that out first :)
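
For example, once the beta is running, something like this should just work (nginx is only an example image; the hostname depends on your configuration as described above):

    docker run -d -p 8080:80 nginx
    curl http://localhost:8080   # or http://docker.local:8080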


xhyve isn't exactly production ready (and the main repo hasn't been updated for a while). Did you guys actually solve some of the major problems (e.g., https://github.com/mist64/xhyve/issues/86 - crash coming back from sleep) or is that an expected part of the beta experience?


Yes, quite a few issues of that nature have been fixed (and we are planning to open-source the changes later in the year once we stabilise the overall application).

The bug above has been reported to Apple and they've reportedly fixed it in the latest 10.11.4 seeds, but we've put in a workaround that detects ACPI sleep events and freezes vCPUs just before going into hibernate mode. None of the beta testers have reported any sleep crashes using Docker for Mac recently, so if you do see anything of this nature please let us know.


I have not experienced this crash, and even had a container running last night, put the laptop to bed, woke it up this morning and the container is still there, running and interactive. Running OSX 10.11.3


> the networking publishes ports to either `docker.local` or `localhost` depending on the configuration.

Perfect. We had to ditch Kitematic on OS X due to the lack of port forwarding, since we couldn't get OAuth redirects to work when developing locally.


"Most of the processes do not need root access" - To create the VM network interfaces the vmnet_start_interface() in pci_virtio_net_vmnet.c function needs elevated privileges... how have you managed to get around not having to run xhyve as root just to have a virtual Nic?


No, Docker doesn't replace VirtualBox. Docker and VB are different tools, meant for different things. The reason the article says "no more VirtualBox" is that previous tools for running Docker on Mac required VB to run the Docker containers, but this new product has no such requirement. It basically removes that heavy VB layer from using Docker containers on your Mac.


You can hook into the X11 socket using Docker, but I'm not 100% certain how this could be accomplished on OS X or Windows. You might be able to forward the socket, but I'm not nearly smart enough for that.


"the simplest way to use Docker on your laptop"

I think they forgot about Linux :)


They said simplest :)


If you purchase a laptop knowing that you will be running Linux and doing a little bit of research up front, it is every bit as simple as running a laptop with Windows or OS X.


Exactly: if you accept that you have to buy a Mac to use OS X and call that simple, you must also judge Linux by buying a System76, Entroware, Dell developer edition or Librem laptop. Alternatively, you can also judge OS X by installing it on non-Apple hardware ;)


Har har guys this was meant to be tongue-in-cheek. Let's not turn it into a flamewar. I'm a happy Linux Docker user but I'm happy to hear that things are becoming really simple and easy for Mac OS X and Windows devs, too.


Until you, say, want to print something, or run one of the bajillion pieces of useful software that aren't available for Linux.


> Until you, say, want to print something,

CUPS and Amahi work great for me. I've had more pain setting up printers on Windows and OS X (the latter started sending print jobs on each probe).

> run one of the bajillion pieces of useful software that aren't available for Linux.

Examples? I can't think of any software that I need that isn't available for GNU/Linux.


I have actually had printing fail more frequently on Windows than on Linux. Printers are quirky.


As a long time Linux user, I don't know about useful software that won't run on Linux. All the useful software I need runs just fine.

Edit: Oh, and I don't understand the comment about printing. Cups works.


cups is only one part of the puzzle; every application has to manage its own method of rendering and talking to some printing agent.

It may also work for your particular device scenario, but there are thousands of scenarios (networks, devices, etc) in which its functionality may be limited or practically non-existent.


Before purchasing any equipment, I always spend time researching how well it works with Linux. Doing this, other than the occasional bad update, I have never had a hardware compatibility issue with Linux. And I do mean never; I don't throw that word around lightly. It does limit my choices, but there's still plenty of good hardware that just works with Linux.


Well, for that, obviously you have to also buy a printer.


Until you start to deal with things like graphics cards, and switching between integrated and gaming graphics cards for different tasks.

Or when you want a distro like arch on a laptop...


Dell Precision laptops now ship with Linux.


Very excited about this. Docker Machine and VirtualBox can be a rough experience.

> Many of the OS-level integration innovations will be open sourced to the Docker community when these products are made generally available later this year.

Does this mean it is closed right now?


I found docker-machine and VirtualBox quite stable (running multiple Flask, Python, and PostgreSQL containers). The only major issue I had was from a 5-year-old VirtualBox bug involving sendfile. That said, I won't miss the extra steps of running eval docker-machine etc.


Interesting to see that at least one of the Mirage unikernel hackers (avsm) has been working on this.

https://news.ycombinator.com/item?id=11352594

I imagine a lot of this work will also be useful for developers wanting to test all sorts of unikernels on their Mac and Windows machines.


A lot more than just one ;)


I'm delighted to read that inotify will work with this. How's fs performance? Running elasticsearch or just about any compile process in a docker-machine-based container is fairly painful.


Our focus so far has been more on reliability, but we intend to increase performance steadily over time. There are lots of interesting optimisations we can make across the whole stack.

Please do check it out and suggest some particular benchmarks that are important to you -- we're busy building up a performance benchmark suite atm.


So, let's say if I am developing a Java EE app under windows with eclipse and want to use docker container for my app, how do I go about it?


https://github.com/mgreau/docker4dev-tennistour-app is a good example of using Java EE 7 / Angular application to show how to use Docker for Java Development

Arun Gupta wrote many excellent posts on how to use Docker to build Java apps: http://blog.arungupta.me/docker-tooling-eclipse-video/ on Eclipse tooling for Docker, and http://blog.arungupta.me/deploy-wildfly-docker-eclipse/ on using WildFly.

https://github.com/chanezon/docker-tips/tree/master/orchestr... is an example leveraging Docker Compose and swarm for a Spring Boot application.

I hope these help you get started using containers for development.


Thank you for the references!


I would say it depends on the deployment model of your application server. For a file based deployment you would have to mount the file system. For a socket based deployment you would have to bind the ports.

The important thing is that WTP doesn't manage the server lifecycle.


This is v.cool, although for the Windows version it'd be great if it became possible to swap out the virtualization back-end so it's not tied to Hyper-V.

At the moment VMWare Workstation users will be a bit left out as Windows doesn't like having two hypervisors installed on the same system...


This is the issue I have too: if I want to use the new Docker for Windows, I'd have to move my VirtualBox Linux VM to Hyper-V and stop using Vagrant.


Does anybody have any guides on setting up dev environments for code within Docker? I recall a Dockercon talk last year from Lyft about spinning up microservices locally using Docker.

We're using Vagrant for development environments, and as the number of microservices grows - the feasibility of running the production stack locally decreases. I'd be interested in learning how to spin up five to ten docker services locally on OSX for service-oriented architecture.

This product from Docker has strong potential.


I use docker, specifically docker-compose to do just that. So far it's 7 containers spread across 5 code bases all brought up with one command, `docker-compose up`.

The django quickstart guide is a good starting point for wrapping your head around it, https://docs.docker.com/compose/django/


Could you share something about how you compose containers from different code bases? This has always felt hacky to me, but maybe I'm doing it wrong.


docker-compose is kind of oriented towards what I think you're referring to; have you tried it out? I've done a bit with it and was thinking about explaining how I used it in a post, but didn't know if it would be very useful outside of what I was doing. If you email me a small description I could probably whip up a docker-compose.yml as an example, since it's fun to play around with.
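
For what it's worth, a minimal docker-compose.yml along those lines might look like this (service names and relative paths are made up; this is the v1 file format current at the time):

    web:
      build: ../webapp         # one code base
      ports:
        - "8000:8000"
      links:
        - api
    api:
      build: ../api-service    # a second code base
      links:
        - db
    db:
      image: postgres:9.4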


Tried to sign up, but the enroll form at https://beta.docker.com/form is blank for me - it just says "Great! We just need a little more info:" but has no forms.


Hi folks, we had an unexpected issue while we were pushing an update to the site (removing the NDA requirement). It should be fixed now and you can sign up as usual. If you're using something like Ghostery, you may need to pause it for this site, as we use Marketo to handle sign-ups.


Sorry about that, the form is fixed now.


We are working on a fix for the Marketo form right now. Sorry about that.


#2 on Hacker News and no one can sign up. Bummer. I've tried 3 different browsers: Chrome, Firefox, and Safari. Same problem. Both Firefox and Safari are completely uncustomized.


Fixed! Thank you!


We're using Marketo to track all the signups and I get a blank screen too if I use ghostery and the like.

Would you mind trying again but allowing Marketo?


I am having the same problem accessing the beta signup form. Any suggestions?


It should be back now. Sorry about that


For those using Firefox, I had to disable tracking protection to get the form to show up.


I also get a blank page with Firefox, but it does display with Chrome.


Yep, same for me here; yes, it works on Chrome with JS enabled and AdBlock disabled.


Has anyone actually gotten to download the thing? I just get a "we'll be in touch".


Thanks! We'll be in touch soon! => That's what I get too.


Try again, and try doing a hard refresh (I needed to).


I wonder if (and hope that!) this fixes the issues[1] with (open)VPN. I can't use xhyve (or veertu) at work because of this.

[1] https://github.com/mist64/xhyve/issues/84


There is a mode that should work, which is likely to become the default soon. We do want feedback on this as it is hard to test all VPN setups.


I'm really excited to see this because I've spent the last few months experimenting with Docker to see if it's a viable alternative to Vagrant.

I work for a web agency and currently, our engineers use customized Vagrant boxes for each of the projects that they work on. But that workflow doesn't scale and it's difficult to maintain a base box and all of the per project derivatives. This is why Docker seems like a no-brainer for us.

However, it became very clear that we would have to implement our own tooling to make a similar environment. Things like resolving friendly domain names (project-foo.local or project-bar.local) and adding in a reverse proxy to have multiple projects use port 80.

Docker for Mac looks like it will solve at least the DNS issue.

Can't wait to try it out.

edit: words
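
One option we've looked at for the reverse-proxy piece is jwilder/nginx-proxy, which watches the Docker socket and routes by a VIRTUAL_HOST variable; roughly (the project image is hypothetical):

    docker run -d -p 80:80 \
      -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
    docker run -d -e VIRTUAL_HOST=project-foo.local example/project-foo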


I cannot wait to get home to play with this!

If I were a 12 year old girl I would be "squee-ing" right now. Ok, I'm lying - I'm a 40 year old man actively Squee-ing over this.

:)

It really plays nicely into my "weekend-project" plans to write a fully containerized architecture based in dotnet-core.



[I work at Docker on the announced Mac app]

Nathan LaFreniere (the author of dlite) is awesome, and we've been exchanging tips and tricks and areas where we can collaborate. He knew exactly where to press to find bugs in our earlier betas...


I am very excited about the new Mac app and I want to try it.

At the moment I use dlite. The thing I love about it is that it's transparent. I hope that the new Mac app has an option or mode to be like that too (start on system boot, doesn't create a new desktop window/gui, SSH from terminal would be enough for me).

Something analogous to MacVim's -v flag; by default "mvim" opens a new app with its own window, but "mvim -v" starts Vim inside current terminal. Not a great analogy, sorry about that.

Thanks.


Yes it starts on boot and doesn't need a special terminal. There is just a small whale in the toolbar so you can exit or change settings.


That's perfect, thanks!


Yes, I've been using this for some time too. It's pretty great, totally recommended to anyone who is fed up with docker-machine or docker-compose or whatever random tool is currently required.


Ditto. It sped up my Docker development wait times by like 5x compared to the virtualbox stack.


My goodness. This is some of the best news from docker this year and we are still just getting started. Packaging various hot reloading JavaScript apps will finally be possible. Gosh. I can't begin to say just how excited I am for this.


We've tried Docker for Mac with John Lees-Miller's excellent NodeJS in container development example http://jdlm.info/articles/2016/03/06/lessons-building-node-a... and it works great!


Can some Docker employee explain how are file permissions going to work on Windows? For me, that's the biggest pain (on Win).


Docker for Windows Samba-mounts the host filesystem into the VM, so Samba maps the permissions.


I'm really hoping that this will be available via homebrew and not a way to force everyone to use Docker Toolbox or, god forbid, the Mac App Store.

Docker Toolbox just brings back too many nightmares from Adobe's awful Updater apps.


Biggest problem with Boot2Docker was volume mounting and file permissions; hope this happens soon.

> Volume mounting for your code and data: volume data access works correctly, including file change notifications (on Mac inotify now works seamlessly inside containers for volume mounted directories). This enables edit/test cycles for “in container” development


One of my pet hates about Docker was the hassle with volume mounting on the Mac and permissions. So glad this is being worked on and can't wait to try it out myself. Makes local development a pain if you can't get your databases to mount a volume and all your dev data disappears ;)


This is one of the features mentioned in the announcement.


Yes I was quoting, but I didn't format the text well.


Oh gosh, I thought it was a super odd comment :)


I run my stack(s) on Vagrant with Puppet for provisioning. I use OSX, but one of the major pain points of working with Linux VMs on a Windows host are file permission issues and case insensitivity.

I don't think Docker can do anything about case sensitivity, but with this new release will permissions differences be handled better?


Funny this appears today, I just discovered Veertu on the Mac App Store (http://veertu.com) 2 days ago and love it. It also uses OS X's new-ish hypervisor.framework feature to allow virtualization without kernel extensions or intrusive installs.


To be entirely honest, I'm quite concerned about your choice of Alpine as the base distro. Their choice of musl over glibc might be cool, but if you have to put old libs inside a container it's hell (if not entirely incompatible).


The use of musl on the host, outside the container, has absolutely no impact on the choice of libc inside the container. The team chose alpine because it's lightweight, well-maintained and security-oriented. You are free to use any distro you want inside the container, and that will never change.


apologies, you are absolutely right. Has been a very long day :)


Finally, I really hated the additional complexity and gotchas that boot2docker carried.


Why does signing up for the beta require agreeing to a non-disclosure agreement?


That's a left over item from the alpha testing. It's now been removed.


I couldn't sign up using Firefox on Windows. I'd enter a username, email and password, then the form would just go blank on submission.


I should note that it worked fine on Chrome.


I really want to try this, but I'm unable to register. At the page where it says "Create your free Docker ID to get started", after I click Sign Up the page just refreshes and my chosen ID becomes blank with no indication of what's wrong. I've tried several different IDs and none of them worked. Browser is Firefox 45.0.1 on Windows 7.


This is amazingly cool. We've been using Docker at Reflect (shameless: https://reflect.io) since we started it, and even if we didn't get all the cgroups features, it'd be super helpful just to be able to run the stack on my laptop directly instead of going through the Vagrant indirection.


Private beta is behind a questionnaire, just FYI. You can't, unfortunately, download anything yet unless you get an invite.


We're onboarding people over time so that we can iterate on the beta as we go. The questionnaire only asks for basic details (name, company, which version you're interested in). You do need a Docker Hub ID first.


How would wider distribution stop you from iterating? Larger support load?

Damnit man, we just want your beta, not your excuses :P


I've been running docker-machine with a VMWare Fusion VM with VT-x/EPT enabled and am using KVM inside my containers to dev/test cloud software. I'd be interested to know if I can still get the performance of Fusion and the support I need for nested virtualization out of Docker for Mac.


We do not currently have nested virtualization support.


This is great news. To get close to what they're doing in this beta, I've been using a brew recipe:

    brew install xhyve docker docker-compose docker-machine docker-machine-driver-xhyve

Really looking forward to trying this out. Signed up for the beta!
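In case it helps anyone else, a rough sketch of using that driver once installed (the machine name is arbitrary):

    docker-machine create --driver xhyve docker-xhyve
    eval "$(docker-machine env docker-xhyve)"
    docker run hello-world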


I've always wondered about invites for open-source projects... that don't even open-source...


If I read correctly, Docker for Mac runs on top of another virtualization layer (xhyve, not VirtualBox) and Docker for Windows runs on top of Hyper-V, which means it is not for production workloads (at least on Windows).

So you can only use it for development. And it is closed source. Hmmm...


This announcement is about a beta for native apps on Mac and Windows. The idea is to let you work with Linux containers on your development machine of choice. The images/containers you build there are just as deployable elsewhere (i.e. production) as they were before, when people had to use docker-machine, VirtualBox, etc.
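Concretely, the build-anywhere/run-anywhere workflow is unchanged (the image name here is hypothetical):

    docker build -t myorg/myapp .   # on your Mac or Windows laptop
    docker push myorg/myapp         # to a registry
    docker run -d myorg/myapp       # on any production Linux host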


When I log in and go to https://beta.docker.com/form there is an empty form, and the JS console says:

    Uncaught ReferenceError: MktoForms2 is not defined


Pause Ghostery and reload. This is unfortunate.


Yes, we're using Marketo for the sign ups and Ghostery blocks that.


Thank you - that did the job.


Hey, this might be unrelated but we had a bug on the sign-up form. It should be fixed now.


This is strange. I just created a Docker ID and was able to log into the regular Hub, but when I try to log into the beta, it keeps saying error.

Is there a user/password length limit? (I used a 30-char user/password. 1Password FTW.)


Will there be an easy way to switch/upgrade from docker-machine with VirtualBox without having to recreate all of my images and containers?

I know it's a small thing, but it's kind of a pain sometimes.


Yes, there is a migration script.


Finally! I've spent the last month or so learning Docker, as I'm somewhat new to this environment. I'm just excited to try it out and have a broader range of tools.


Thank god for no more VirtualBox; that thing was a pig, with endless networking and IO problems that led every developer using it to come to my team for help.

also, Oracle.


Using Docker on a Mac always seemed too hackish because you had to run a separate VM. This seems like a step in the right direction, and I'm excited to revisit Docker!


Is the source code available? I don't see it at https://github.com/docker


From the post:

"Many of the OS-level integration innovations will be open sourced to the Docker community when these products are made generally available later this year."


So on Windows this runs Linux in their isolated environment? I just got excited thinking it meant Windows containers on Windows, but it looks like that's not the case.


Would this be a good way to deploy a program based on OpenCV to non-technical users? So far I haven't found a good way to do that.


Great news, but I'm not sure a young startup should be wasting money on what was obviously a professionally produced launch video.


This is HUGE! Looking forward to trying it out.


The link says it's Hyper-V on Windows, but then says Windows 10 only... Anyone know if Windows Server is also supported?


Kinda surprised they didn't just wait 7 days and announce this at Build with Microsoft.


I would like to see Windows Docker images. Will this ever happen? Or can I do it already?


Windows images are in beta. They will be released later this year.


Microsoft has been hinting that they want to get rid of localhost loopback access from browsers (they took it out of Edge entirely for a while, now it can at least be enabled).

If they do decide to block browsers from accessing localhost, will that impact Docker?
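For context, the common dev workflow does go through loopback, since published container ports are usually bound there, e.g.:

    docker run -d -p 127.0.0.1:8080:80 nginx   # then browse http://localhost:8080

so a browser-level loopback block would at least affect that pattern.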


shut up and take my money!


I'll finally get rid of docker-machine. THANK YOU, DOCKER.


Please do check out the documentation if you're a current docker-machine user. The link will be in the invite email that you'll receive along with the beta token.


Non-news (support for two new hypervisors added to an already dodgy package) voted up to 718 points. God, you people are sheep. I guess what we take from this is that Docker is getting desperate for headlines.


Is this relevant to my app developer community on Slack?


Why would Windows Pro be required?


Hyper-V VMs require Professional or higher editions of Windows.

which is usually, like, what most people would have and would consider the standard Windows install. Windows Home is... very neutered, to say the least.
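If you're not sure what you're running, one quick check from a Windows command prompt (the findstr filters are just a convenience):

    systeminfo | findstr /C:"OS Name" /C:"Hyper-V"

which shows both your edition and the Hyper-V hardware requirement lines.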


I judge Microsoft a bit for not just discontinuing Home.

Some of the missing features in Home are what I'd describe as "immoral." In that they aren't just luxuries, they're important parts of the OS or security features e.g.

- Group Policy Editor: This is the primary place to modify hundreds of local computer settings. They could have left out Domain Join, and kept the local Group Policy Editor.

- Start Screen Control with Group Policy: Adds more group policy options to modify the start screen/menu look/feel.

- Enterprise Mode Internet Explorer: Name notwithstanding, this allows people to use legacy webapps with modern IE.

- AppLocker: Security feature (isn't even in Pro, incidentally!). I'd turn it on.

- Bitlocker: Full drive encryption (with different decryption options).

- Credential Guard: Not used to protect non-domain credentials.

- Trusted Boot: Because home users don't get rootkits?

Windows 10 Home is categorically less secure than Windows 10 Pro, which is in turn less secure than Windows 10 Enterprise. Features like AppLocker, Credential Guard, and Trusted Boot would benefit all versions of Windows, and Bitlocker should be available and on by default.

When you have a "security" category in the feature list and use it to differentiate versions of the OS, you really have to ask yourself how highly you prioritise security in general.


Can somebody provide a link to this app? I can't wait anymore! :D


still just VMs?


Unfortunately, despite the title, Docker still does not run natively on a Mac or on Windows. It runs only inside a Linux VM.

From the OP:

"The Docker engine is running in an Alpine Linux distribution on top of an xhyve Virtual Machine on Mac OS X or on a Hyper-V VM on Windows"


The difference between this and Docker Toolbox is that important parts of Docker are now native (in a way), e.g. the filesystem and apparently networking.

This is achieved by using the native virtualization support on OS X / Windows instead of VirtualBox, and by working closely with Apple and Microsoft.
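For example, the piece you actually type into is a native client binary talking to the engine in the VM; "docker version" makes the split visible:

    docker version   # Client OS/Arch: darwin or windows; Server OS/Arch: linux/amd64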



