I'm also in the beta, and I like it a lot. It's functionally equivalent to `dlite`, which Nathan LaFreniere has done an extremely good job on. He deserves massive credit for making OSX Docker dev bearable and for providing the inspiration for "Docker for Mac".
A few issues I've seen:
1. I cannot believe they are using `docker.local`. This hostname will cause nothing but trouble for years to come. DON'T USE `.local`! Apple has decided that `.local` belongs to Bonjour, and due to a longstanding bug with their IPv6 integration, you can expect to see a 5-10s random delay in your applications as Bonjour searches your local network to try to resolve `docker.local`. Yeah, you put it in your `/etc/hosts`? Doesn't matter. Still screws up. Use `docker.dev` or `local.docker`. [http://superuser.com/questions/370559/10-second-delay-for-lo...]
2. -beta8 is screwed up. It won't bind to its local IP anymore. The only option is to port forward from localhost. Unfortunately, Docker isn't offering a download of beta7. Thankfully, I still had the DMG around.
3. The polish is still lacking. Most menu bar items ask you to open up something else.
4. Why "Docker for Mac"? Couldn't the team think of a less confusing name? Now I have "Docker" running "docker".
Otherwise - great projects, and again, much credit to @nlf for `dlite`. If you're not part of the beta, check out dlite (https://github.com/nlf/dlite). It's at least as good as Docker for Mac.
> I cannot believe they are using `docker.local`. This hostname will cause nothing but trouble for years to come.
We are indeed moving away from `docker.local` in Docker for Mac. There have actually been two networking modes in there since the early betas: the first one uses the OSX vmnet framework to give your container a bridged DHCP lease ('nat' mode), and the second one dynamically translates Linux container traffic into OSX socket calls ('hostnet' or VPN compatibility mode).
Do give hostnet mode a try by selecting "VPN compatibility" from the UI. This will bind containers to `localhost` on your Mac instead of `docker.local` and also let you publish your ports to the external network. One of our design goals has been to run Docker for Mac as sandboxed as possible, so we can't just modify /etc/resolv.conf to introduce new system domains such as ".dev".
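A quick way to check which mode you're in (a sketch; the nginx image and the port numbers are arbitrary):

    # publish container port 80 on host port 8080
    docker run -d -p 8080:80 nginx
    # in VPN compatibility mode this should answer on localhost
    curl -I http://localhost:8080
    # in nat mode, try the VM's address instead
    curl -I http://docker.local:8080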
We've been iterating on the networking modes in the early betas to get this right, so beta9 should hopefully strike a good balance with its defaults. It's also why we've been holding a private beta, so that we can make these kinds of changes without disrupting huge numbers of users' workflows. Your feedback as we figure it out is very much appreciated!
A minor addition: to use 'localhost' in beta 8 you may need to also run the internal debug command:
pinata set native/port-forwarding true
In previous betas this setting was implied by "VPN compatibility" mode, but in beta 8 it was made an independent setting.
In beta 9 using localhost will be the default -- as avsm says we've been iterating on the network configuration trying to find the most compatible / least surprising defaults. Hopefully after beta 9 it will be stable on "localhost".
> It's also why we've been holding a private beta, so that we can make these kinds of changes without disrupting huge numbers of users' workflows.
Pretty much every developer's workflow is already heavily disrupted by not having access to the beta. I've spent days trying to get filesystem notifications working with dlite and Dinghy (I succeeded with the latter). So whatever you guys do, it's still better than what's currently available.
I signed up for the Docker for Mac beta on the same day as the announcement. I haven't received an invite yet. Did I miss the invite, or aren't invitations sent on a first-come, first-served basis?
Not sure if that is important for the use case at hand, but it was the reason why I stopped using .localhost in my LAN (in favor of just using plain hostnames, e.g. “x1”).
That is the multicast DNS RFC (RFC 6762), and it says .local is used for zeroconf and not manual site-local configuration:
Any DNS query for a name ending with ".local." MUST be sent to the
mDNS IPv4 link-local multicast address 224.0.0.251 (or its IPv6
equivalent FF02::FB)
> The implementation of both approaches on the same network can be problematic, however, so resolving such names via “unicast” DNS servers has fallen into disfavor as computers, printers and other devices supporting zero-configuration networking (zeroconf) have become increasingly common.
Which seems to confirm what the original poster wrote - it sounds like a bad idea to use it on OSX, where it collides with Bonjour.
I do not work on this project, but perhaps this is why "VPN compatibility mode" was enabled by default.
Have you tried going to the settings menu and disabling it?
I just installed docker for mac after getting an invite in this thread, and this is the _only_ issue I'm facing. If I access the IP, it works extremely well. But if I use `docker.local`, then it takes about 5-6 seconds to connect, most of that in name resolution.
I believe there's a .dev TLD coming, so don't use docker.dev, either.
Maybe it's time for an RFC that creates a guaranteed-not-to-be-public TLD, kinda like example.com -- in fact RFC 2606 already reserves `.test`, `.example`, and `.invalid`. Until then, docker.<completely-NSFW-slur> might be your best bet, actually.
Same experience here, both on Mac and Windows. They've done a great job making it "just work". The user interface pieces are a bit raw -- perhaps "minimalist" or "unobtrusive" would put that in a better light! -- but clearly most of the work has gone into the lower level integration, where it shines.
Docker for Mac/Windows, once released, will nuke the ick factor on those platforms from orbit, which can only lead to even more adoption.
I hope there's going to be an easy way to package this with your own docker image in order to have a new way to distribute applications. My use case is running a server locally so you can use a webapp with local network speed, offline access, and lots of local storage.
Interesting, this is the first time I'm reading about it [1]. Well, if anything it looks like a web app would have to be rebuilt from the ground up to fit that model. I haven't yet read much about it, but here are a few questions that pop up immediately:
1) If you have a container per data object, doesn't that mean you also have to start a process every time a user opens a document? So forget about doing any computation in the setup. You'd have to give up even things like using regex patterns in Python (which need to be compiled once) or anything running on a VM (which needs to be started). The use case seems extremely limited, but maybe I'm not getting something right here.
2) How do you handle indexes, views and collections over a large set of data objects?
It turns out that converting a web app to Sandstorm is mostly deleting code. You delete your user management, your collection management, your access control, etc. What you have left is the essence of your app -- the UX for manipulating your core data model, of which you now only need to worry about one instance.
> doesn't that mean you also have to start a process every time a user opens a document?
Most apps we've encountered only take a couple seconds to start. But we're working on a trick where we snapshot the process after startup and start each grain from the snapshot, thus essentially optimizing away any startup-time slowness.
> How do you handle indexes, views and collections over a large set of data objects?
Sandstorm is (currently) designed for productivity apps, not for "big data" processing. The data within a single grain is usually small. That said, you can run whatever database you want inside the grain.
> Most apps we've encountered only take a couple seconds to start. But we're working on a trick where we snapshot the process after startup and start each grain from the snapshot, thus essentially optimizing away any startup-time slowness.
This is very interesting. I've been looking for something like this since 2007 for optimizing the startup time of some apps. However I couldn't find any suitable technologies for this purpose; VM memory snapshotting is heavyweight and is slower than starting an app from scratch, OS-level tools like cryopid can't even be called alpha-level. What kind of snapshot technology do you intend to use, and how confident are you that it will work well?
Thank you for your response. Sorry, I'm still not getting it. Not even talking about big data, what I mean is a very simple database application, say with 10k documents in a document-based storage model. Now let's say I want to show a list view and a sum over one of the fields in those documents. You'd probably agree that this is a typical use case for a productivity app. Now how would I do this in the Sandstorm model?
> You'd probably agree that this is a typical use case for a productivity app.
Actually I can't think of any app that matches the abstract problem you describe. Can you give a specific example? Usually the answer to these problems becomes much clearer in the context of a specific app.
Pretty much anything you'd like to do with CouchDB or MongoDB. Replace "document based storage" with table based and you have pretty much any *SQL application.
So I guess my question is: How do you expect people to use your fine grained model with databases? If the answer is "not at all" then I find the scope too limiting. If the answer is "1 grain == 1 db" then I find the claim that you are solving difficult permission problems to be false.
Note: I don't want to be too critical here, I'm just trying to pick holes in your claims of scope so I can categorise the power of sandstorm and how far it could be useful for things I'd like to build.
> Pretty much anything you'd like to do with CouchDB or MongoDB
Wekan is a Trello clone that uses MongoDB for storage. On Sandstorm, each board lives in a grain, so there ends up being one MongoDB per board. This works fine. The only thing stand-alone Wekan ever did that queried multiple boards at once is display the user's list of all boards. On Sandstorm, displaying the user's grain list is Sandstorm's job, not Wekan's -- and indeed, usually the user is more interested in seeing the list of all their grains rather than just the Wekan boards, so delegating this to Sandstorm is a UX win.
If that is not the kind of example you have in mind, then you really need to give a specific example.
I've noticed some pretty extreme performance penalties with Docker for Mac. Where VirtualBox would sit below 60% CPU idling a bunch of services (MySQL, RabbitMQ, Redis, Elasticsearch, Memcached, several Python daemons), Docker for Mac's driver hovers around 100% (often spiking to 200-300%), with another 20-30% (spiking to 50-80%) on the osxfs.
I'm going to guess it'll get better in time. It would be nice to get some insight into just what is burning CPU cycles. The experience besides that was really top notch IMO.
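In the meantime, a rough way to narrow down where the cycles go (a sketch; `docker stats` is per-container, and the host-side process names in the grep are a guess that may differ between betas):

    # per-container CPU/memory, to spot a pathological container
    docker stats --no-stream
    # host-side: a single top sample, filtered for Docker for Mac processes
    top -l 1 -o cpu | grep -i docker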
The early betas focussed on feature completeness rather than performance for filesystem sharing. In particular, we have implemented a new "osxfs" that implements bidirectional translation between Linux and OSX filesystems, including inotify/FSEvents and uid/gid mapping between the host and the container. Getting the semantics right took a while, and all the recent betas have been steadily gaining in performance as we implement more optimisations in the data paths.
If you do spot any pathological "spinning" cases where a particular container operation appears to spike the CPU more than it should, we'd like to know about it so we can fix it. Reproducible Dockerfiles on the Hub are particularly appreciated so that we can add them to the regression tests.
Previously I had permission issues (only root could write to the host) when mounting a folder from the OSX filesystem. I hope this fixes those issues. I'm talking about this issue: https://github.com/boot2docker/boot2docker/issues/581
Our approach is to focus on functionality and correctness first, and then improve performance over time.
We're building up a suite of performance benchmarks to help us track progress -- are there particular benchmarks that you would recommend we add? I'll certainly add "CPU load while idling" to the list.
Maybe something like a Django, Rails, etc. DEBUG=True dev server - they tend to poll for file changes which could really tax your osxfs implementation at "idle".
> Our approach is to focus on functionality and correctness first, and then improve performance over time.
Just my two cents: I think that the interface and features are excellent and a joy to use, but performance was a show-stopper that forced me to quit using the Beta for Rails-application development:
- a simple request that took ~1/3 of a second using VirtualBox and Docker Machine took six seconds on Docker for Mac
- a more complex request went from one second to twelve seconds
I didn't have a chance to dig into it, but I would guess that it has something to do with osxfs and the many files that Rails loads, as it reminded me of the difference between sharing files via VirtualBox's own file sharing vs. NFS, the latter being a significant improvement.
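A micro-benchmark along those lines is easy to sketch (file name and iteration count are arbitrary; `time` here is the busybox applet shipped in the alpine image):

    # create a test file on the host
    echo hello > testfile
    # read it 1000 times through the osxfs bind mount
    docker run --rm -v "$PWD":/data alpine time sh -c \
      'for i in $(seq 1000); do cat /data/testfile >/dev/null; done'
    # same loop against the container-local filesystem, for comparison
    docker run --rm alpine time sh -c \
      'echo hello > /tmp/testfile; for i in $(seq 1000); do cat /tmp/testfile >/dev/null; done'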
I've noticed this as well. It seems like the docker.local hostname can sometimes take a long time to resolve. If I resolve that hostname and visit the IP directly it generally works great.
Definitely check out the beta forums as most of the issues I've found have already been reported with workarounds.
I'm wondering how Docker for Mac handles conflicting entries in /etc/exports. This is currently still a problem when using dlite and vagrant on the same host[1].
Oh, really? I've been running boot2docker-xhyve for months now, and performance-wise it's been far better than VirtualBox. I wonder if it's a quirk of how Docker for Mac is set up?
Sounds promising. But I'd like to see Docker work with Microsoft to produce something even better for Windows, using the new Windows Subsystem for Linux (WSL). With WSL, Docker and Microsoft should be able to bring Linux-based Docker containers to Windows, without the performance hit and resource fragmentation that inevitably come with virtualization. True, WSL doesn't support namespaces and cgroups, but IIUC, Windows itself has equivalent features. So the Docker daemon would run under Windows, and would use a native Windows API to create containers, each of which would use a separate WSL environment to run Linux binaries. I don't know how layered images would be supported; Microsoft might have to implement a union filesystem.
Native Docker support in Windows will be available in the next Windows server release, and is already available in the Windows technical preview bits (since TP4).
To be clear, Windows containers allow you to run containers that contain Windows executables under Windows. mwcampbell is talking about running containers that contain Linux executables on Windows.
I guess the only thing that would prevent this from working is having the linux subsystem available on the host (and of course support for all the syscalls being made).
What is the use case though? What would be even better is if MS created a "windows container" that could run under Linux; then you could just ditch Windows altogether.
I don't see big companies using something this hackish for containers that are running on servers anyway. For working on the desktop this might come in handy for devs, but honestly I think MS should focus their energy on something else.
Keep in mind, Linux containers work since there's only one Linux kernel, and the rest of the OS is just files that can be stuck into the container. Anything that can pretend to be the Linux kernel (like a Solaris "branded zone") can run a Linux container.
But you'd actually need many different kinds of "Windows container", since Windows actually has an abundance of kernel-exposed runtimes: the DOS VMM, Win16 with cooperative threading, Win32 with COM, WinNT, WinRT, the POSIX subsystem...
You could certainly write a particular container runtime to allow a specific type of app (e.g. WinRT apps) to run, and that might be enough to enable developers going forward to target both Windows and Linux hosts for their Windows apps. But that would hardly be Windows, in the sense of being able to have your app launch arbitrary other "Windows" processes in the same container the way that Docker apps do with arbitrary Linux processes.
Having all the machinery to simulate all the vagaries that have changed in the Windows OS core over time, such that one container could contain any and all Windows processes running together, would be a much harder challenge. I don't know what the combined surface area of all the runtimes the Windows kernel exposes looks like, but I can't imagine it'd be something even MS could re-implement as a Linux-kernel translation layer easily (especially considering all the compatibility shims each layer provides to make specific apps work, which would have to be carried forward into the translation layer).
The same use cases as Linux. Being able to isolate applications, being able to allocate resources more fluidly, being able to centralize management all the better. Windows has actually gained some real nice virtualization management over the past few versions, and this would fit right in with them.
Why do people value that so much? I really don't care if a tiny VM is running in the background.
Also, running that VM gives me more confidence that it will also run on the production machine (since they use the same kernel and the same docker version).
The only problem I had with docker was that it didn't used to support shared volumes outside the home folder on Mac (I think they changed that now, but I'm not sure).
There is still a tiny VM running. It just happens to run on the native OS X Hypervisor framework. From the docs:
> Hypervisor (Hypervisor.framework). The Hypervisor framework allows virtualization vendors to build virtualization solutions on top of OS X without needing to deploy third-party kernel extensions (KEXTs). Included is a lightweight hypervisor that enables virtualization of the host CPUs.
I've had a great run with VirtualBox, between Vagrant and Docker Machine. But I can't lie, I won't miss its installer, uninstaller, OS X kernel extensions, questionable network file sharing, and more. Removing a big blob of software between me and my virtualization-ready CPU is progress.
Then Docker for Mac is the one-two punch. Simpler virtualization, extremely rich containerization.
If you want to run services in docker containers with Docker Toolbox (e.g. a MySQL db), and you want the db stored on the Mac host, then you have to worry about two layers of folder mounts (one from host -> VM, one from VM -> container) and another two layers of port forwarding (same as above) to make it 'feel' like you're running MySQL locally.
With the beta, all of that is taken care for you with a couple of settings, and it's just much simpler to get up and running.
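For example, the MySQL case above collapses to a single command (a sketch; the image tag, password, and host data path are illustrative):

    # MySQL in a container, data kept on the Mac host, reachable on localhost:3306
    docker run -d --name db \
      -p 3306:3306 \
      -v "$HOME/mysql-data":/var/lib/mysql \
      -e MYSQL_ROOT_PASSWORD=secret \
      mysql:5.7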
Even boot2docker (docker-machine's predecessor) could do that. It shared the folders correctly and managed port forwarding. Of course localhost stops working, but I just added the IP to /etc/hosts and entered docker.local instead (the docs say the IP might change, but that's never happened to me).
Except on Mac, where it sometimes doesn't. Have you not run into permissions issues? Or having a file-watcher process running in the container not being triggered by changes from the host? There are a tonne of quirks that came with boot2docker. It was worth it, of course, but they were still there; and this app is specifically aiming to address them.
Docker For Mac also does some other cool things, for instance exposed container ports are available at the address `docker.local`. Solutions to help deal with filesystem permission mis-matches, filesystem notifications, and VPN compatibility (all of which are things VirtualBox struggles with) are also being baked in.
It does in the latest update. Just run "pinata set native/port-forwarding true" and your docker containers will be accessible via localhost.
Previously this only worked with the VPN compatibility mode, but it's available on its own now.
I also recommend running "pinata set network nat".
VirtualBox and VMWare are hardly "tiny." I've never had a good experience with VirtualBox on any platform. Thing is constantly broken and endlessly updated to break in newer, less Google-able ways and just causes never-ending grief in unexpected places for me.
You're still running a VM with this, just via xhyve and the OSX Hypervisor framework, rather than via Virtualbox or VMWare.
Which actually makes me wonder how they're managing memory for the VM hosting Docker here. Are they specifying a set fixed allocation? Is memory usage configurable somewhere?
Not quite configurable yet, but it appears that's the intention. (I wouldn't be surprised if they'll try to make that a bit more dynamic, if the hypervisor framework allows it.)
Not necessarily. If I allocate 8GB to a VM, my system still runs smoothly even when I'm already using 10GB (I have 16GB).
One problem is that the VM won't clean up the RAM properly, but the swap can handle that quite nicely, actually. And I never utilize 16GB of RAM; 8 would be more than enough for me, and RAM is really cheap these days (even on Macs).
2 windows with 20-40 tabs each. Doesn't take up that much RAM (less than 2GB). I never have more than 2 windows of any application (e.g. only 2 Terminal windows) because it breaks the shortcut to get to the previous window.
I've been using the Mac Beta for a few weeks and I can also say it's great. Install is easy and it just works. It's such a relief being able to do dev work directly on my machine without docker-machine/VirtualBox. I've been hitting it with a variety of Ubuntu-based containers without any issues.
Author here. I was using that prior to getting in the beta. Tremendous work went into that driver, so I'm happy to see the techniques get picked up elsewhere.
The touted "native" is not what it is all cracked up to be. Maybe windows is a plus that brings a few souls into the fold, but I've been looking for OSX performance ratings and only found some comments here and there that are like my experience.
On my El Capitan machine, the exact same setup in Docker Beta takes roughly ten times as long to do its thing as my more flexible vbox setup did. A Java stack (Jenkins) starts in about 1.5 minutes, but with Docker Beta it takes about 15 minutes!
So, my docker-machine setup lets me see my hosts with vbox, manage them with docker-machine, and get NFS tweaked with docker-machine-nfs. The boot2docker OS is nice and small, and it works.
So for me this is quite a contrast with the 'native' Alpine-image-based Beta, which in my 5-hour stint with it offered little way to get an overview of or inspect things without getting new/more gear.
I have the Docker for Windows Beta, but when I installed it on my Surface Pro 3, it immediately caused the device to get stuck in a BSOD loop. I think it has something to do with Hyper-V and connected standby, but I'm not 100% sure. I wasn't able to find an answer because it's so early on. I really want to get into Docker, but that bug has killed any possibility of me adopting it as of right now. I did install it on a desktop (which I lightly use) and it worked fine. With the new Windows 10 Insider build on that desktop, though, Docker constantly asks permission to run.
Anyhow, I really hope someone does a good overview of the Docker for Windows beta, as well as the Ubuntu environment within Windows 10 now... Seems like OSX gets all of the dev love, so I'm wishing and hoping for a really nice Windows overview, as I am currently having a hard time with both. Neither, as of right now, works well.
if you want a more technical review of the Windows beta, read this: http://docker-saigon.github.io/post/Docker-Beta/
But note that beta 8 was released a few days after that review and already introduced some changes.
Also, for the Windows beta, it very much is still a beta.
I'm sorry for your laptop experience and glad you got it working on your desktop. There's a good chance it is related to Hyper-V, but we would need more info to debug. Could you send your logs to beta-feedback@docker.com?
Docker for Windows still requires elevated privileges to start. This will be addressed in a couple of releases.
I started playing around with Docker for Mac in an attempt to get my whole dev environment set up in Docker. It was really slick, especially being (re-)introduced to docker-compose which makes connecting containers very easy.
There is a ton of potential there. My biggest challenge is that the documentation hasn't quite caught up to all of the interesting stuff that is going on. I'd certainly welcome some more opinionated answers for how to develop on Docker. Specifically: how to not run apps as root, as almost all examples use root and permissions are annoying if you don't do so; how to use docker containers for both dev and prod; best practices for getting ssh key access into a container during the build phase.
But much of it Just Works at this point, I'm pretty confident that the best practices will catch up in time.
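For the running-as-root question, the usual pattern (a sketch, not an official recommendation) is to create an unprivileged user in the Dockerfile and switch to it:

    # hypothetical: append a non-root user to a Debian-based image's Dockerfile
    cat >> Dockerfile <<'EOF'
    RUN groupadd -r app && useradd -r -g app app
    USER app
    EOF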
I'm a node.js developer. I understand the benefit of using docker for deployments or CI testing, but I have yet to be convinced of the benefits of using it for development on my local machine.
I install node, postgres, and redis natively and it all works fine. What benefits does docker provide to my workflow?
>I install node, postgres, and redis natively and it all works fine. What benefits does docker provide to my workflow?
Isn't it obvious?
With docker (or vagrant, or at least a VM, etc.) you can have the SAME environment as the deployment one. If you run OS X or Windows, your direct local installs will differ in numerous ways from your deployment. The same goes if you run Linux but not the same distro or the same release.
And that's just the start.
Who said you'd be working on only one deployment/app at a time? If you need two different environments -- it could even be while working on version 2.0 of the same web app with new technologies -- e.g. one with Node 4 and one with Node 5, or a different postgres version, etc., you suddenly have to juggle all of these in your desktop OS.
Now you need to add custom ways to switch between them (e.g. you can't have 2 postgres instances running on the same port at the same time; see the sketch below), some will be incompatible to install together, etc.
Without a vm/docker you also don't have snapshots (stored versions of the whole system installed, configured, and "frozen")...
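A sketch of the port-juggling point (versions and host ports are arbitrary):

    # two Postgres versions side by side, no host installs to juggle
    docker run -d --name pg94 -p 5432:5432 postgres:9.4
    docker run -d --name pg95 -p 5433:5432 postgres:9.5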
Having dev servers set up the same as production makes sure that none of the little gotchas pop up that can cause problems. You can more readily guarantee that the version of every part of the stack is the same, and that the configurations are the same. One of the things this lets you do is work deeper in the stack without nearly as many concerns. You can test config tweaks, hand-rolled builds, etc., knowing that a rollback is just an rm -rf and untar away, or that a finalized config change is expressed as a single diff.
I run a docker-compose file and never, ever have to install node/postgres/redis myself, nor make sure I'm using the right version or have the right configuration files.
I pass the repo, with the Dockerfile and docker-compose.yml, over to another developer and they do the same thing. They don't spend hours getting node/postgres/redis/whatever set up and then fighting environment issues to match my environment, or staging, or production.
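For anyone who hasn't seen one, a minimal sketch of such a compose file (service names, images, and ports are illustrative, not the poster's actual file):

    cat > docker-compose.yml <<'EOF'
    version: '2'
    services:
      app:
        image: node:4
        working_dir: /app
        volumes: [".:/app"]
        command: npm start
        ports: ["3000:3000"]
        depends_on: [db, cache]
      db:
        image: postgres:9.5
      cache:
        image: redis:3
    EOF
    # one command brings the whole stack up
    docker-compose up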
Installing them natively is clunky; they are managed in all different ways depending on how you install them. They need to be started and stopped manually. You are also turning your machine into a "snowflake" with unique combinations of OS and service versions.
With docker, you can make a dockerfile for your project and make it painless and consistent to run anywhere. You can also create a docker-compose file if you need other services like redis. It really is the holy-grail once it clicks.
Do you put your applications in production yourself, or do you have an operations team which takes care of it for you?
If you develop everything using Docker, running the containers on another (Linux) computer is much easier as everything is already prepared and ready to bundle up and deploy.
If you develop on Linux and deploy to Linux, don't you feel you will catch kernel and other OS-specific issues much faster? i.e. Before they become a problem in production?
Wait, not even another kind of VM? Have you ever had to work with a team that has an unreliable environment? And had to walk them through debugging an error message for installing one of them, or wanted to add something with further dependencies?
(Not to say Docker's immune from that; the sudden deprecation of boot2docker in favor of docker-machine was a nasty surprise.)
Yes, we screwed up on the b2d->machine transition. Sorry for making that experience unnecessarily confusing. We've learned a lot from that mistake and are working hard to avoid repeating it.
When you have 15 of those, things start to make sense. I used vagrant in school just so that I wouldn't have any lasting tweaks of DBs and weird things you end up doing. Also, with a provisioning script, I can get my projects running to this day. My SNOBOL, Smalltalk, and Scheme projects can all be run just by running vagrant up. I don't have to make sure that my current machine has all of the dependencies.
When we developed an Angular and Java site, I set up vagrant to configure Tomcat, node, Java, and all of the plugins required to get Tomcat and Maven to play nice together. I did it once, and then everyone else on a unixy platform was able to not spend time dealing with that. Now that the class is over, all of that is removed from my machine, but I can always crank it back up in the time it takes to install all of those dependencies.
For anyone wanting to use all this cool stuff without waiting for the release, check out nlf/dlite https://github.com/nlf/dlite which has the xhyve implementation already.
VMWare Fusion does a few extremely useful things that Docker doesn't - for instance it can hook into an existing Boot Camp install and load it as a VM. It may be a bit heavy for the (Linux) applications Docker excels at, but for me it's worth the money just for the Windows VM support.
Anybody know if there is a full guide for the migration from toolbox to Mac beta? I've installed the beta, but I'm wondering if there's old cruft that I'll need to uninstall to be completely on the new.
I signed up for the Beta, but I have not gotten access yet. I was hoping to see a walkthrough of an example in your review so I could gauge how easy it is compared to the old docker setup on OSX.
So if I'm using dlite now, and I want to transition to Docker for Mac once I get into the Beta...what do I need to do? Fully uninstall dlite? Can they be run side by side? (assuming no)
I am not sure if they conflict; there may be an issue with them both trying to use the same docker socket, but you can probably just start one after you stop the other.
Good review. One thing mentioned is that the author was able to remove Kitematic amongst other things. Kitematic is a GUI for Docker. There is currently no replacement for it.
I was under the assumption that Kitematic wouldn't work with the beta, but lo and behold, it does. I'm not sure what it will do if you have the old Docker Toolbox installed at the same time, however.
I get the impression that this is not that useful for development due to very weak networking support.
For example, I use a single docker installation in a VM to test several unrelated projects, all of them providing a web server on port 80/443. I do not want to remap ports, so as not to deviate from the production config. Instead I added several IPs to the VM and exposed the relevant containers on their own IP addresses. Then for testing I use a custom /etc/hosts that overrides production names with the VM's IP addresses. This works very nicely.
But I do not see how something like this would be possible with "Docker for Mac".
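For reference, a sketch of that kind of VM setup with plain docker (the addresses, interface, image, and hostnames are made up):

    # inside the VM: add a second address for project A
    sudo ip addr add 192.168.99.101/24 dev eth1
    # bind project A's web container to that address only, keeping port 80
    docker run -d -p 192.168.99.101:80:80 project-a-web
    # on the workstation: point the production name at the VM
    echo '192.168.99.101 www.project-a.example.com' | sudo tee -a /etc/hosts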
My only issues so far have been: 1) the docker.local issue on Mac (as a few others have mentioned) and 2) I still get some VPN issues with Cisco AnyConnect.
Oh... that's interesting. That would even be useful on Linux to enable greater resource separation between processes -- say, being able to lock all Docker processes down to 1-2 cores on a machine, with a hard memory limit they can't exceed.
What you mention is a major reason why Linux containers were invented, and it is already possible with Docker today. Take a look at the '--memory', '--cpu-shares', and '--cpuset-cpus' flags for 'docker run', for instance.
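For instance (a sketch; the image and the limits are arbitrary):

    # pin a container to cores 0 and 1 with a hard 512MB memory cap
    docker run -d --cpuset-cpus="0,1" --memory=512m redis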
I would bet money that it does not. The Virtualbox based solutions all create one VM and run all containers there -- why would they stop doing that just because they're using a different means of virtualization?
Not really, unless you are going to be running OS X in your server. You want the container in your development machine to be as close to the one that'll run in production as possible to minimize "Works on my Machine" issues.
I think Docker for Mac is the way to go, but until disk performance is up to scratch, I suggest you have a look at dinghy (https://github.com/codekitchen/dinghy). It just works too, and it's 10x faster than docker-machine with vmware/virtualbox shares (it uses NFS).
Can anybody in HN provide a quickpath into the beta? I signed up when it was first announced (seems to be over 30 days ago: https://news.ycombinator.com/item?id=11352389) but haven't heard anything back yet.
IMPORTANT: as suggested in another comment, please sign up for the beta before responding.
Thank you all for investing your time in this thread; it is incredibly rewarding for the team to see their work appreciated and used by our fellow hackers. Constructive criticism with suggestions for improvement is the absolute best - thank you for those in particular.
WestCoastJustin, a lot of your comments got marked as dupes, because they are identical to each other.
I've gone through and vouched them for you, so now they will show up, but new ones will have the same problem. Perhaps you can modify each reply slightly to avoid tripping the detector?
shykes, it might be useful to add a note that people need to sign up to the beta first!
Ah, thanks very much! FYI for anyone reading this. You'll need to have signed up at https://beta.docker.com/ first, then post your Docker Hub ID (username), or send email to feedback+hn@docker.com. We cannot send beta tokens if you have not signed up to the beta first.
Hello -- I lead a team developing a very complex enterprise application. We have already spent a lot of time with Docker to simplify our CI system. I'd love to join the Docker for OSX beta to figure out how we can all start using it for development. This will enable us to delete tons of ugly code.
Can't wait to see the power of the new Hypervisor. I'm a student, I've played with Docker for a year, and this seems like a really big deal for me, so I can neatly organize my whole development environment and learn the basics of containerization and deployment along the way!
Really excited to see things built on top of Hypervisor.framework. I work with genomics researchers and I'm always encouraging Docker images for reproducible analyses. Starting workflows on OS X that can scale beyond laptops is really interesting! Thanks!
I signed up on the first day (I actually signed up again today, hope that doesn't matter) but still haven't heard anything yet.
We are using docker mostly for development and integration testing on Red Hat, but it would be handy to try out the better Mac experience.
My personal docker ID is zkjiang, thanks.
Signed up on March 25th and continue to check my inbox every day. So keen to get my org using Docker, but the dev experience has been a blocker to date, so I'm very eager to get them on board!
Hello, I'm working on a project that makes extensive use of docker, for a client with a very restrictive VPN. I would love to get my hands on the docker beta to try out the VPN compatibility. - thanks!
Dockerid: jzmartin84
Thanks for the offer, shykes! I'm currently using Docker to emulate and test a server setup (multiple web servers talking to multiple databases) locally and I plan on using it to check cross-compiled linux binaries (using Rust and MUSL).
My company has an increasingly complex SOA setup that right now is developed on an ad-hoc basis, but with containers we expect to really streamline our dev setup and dev/production parity. We're all on Macs, so this would be pretty cool to get access to.
I'd love to try it out. I've been struggling with Docker ever since I moved from Linux to Mac. Especially with setting up dev environments for our projects. VirtualBox shared folders just don't cut it.
Would love to try this out on my mac. Eagerly waiting for the confirmation mail since I registered. Use case is to transform my development environment and get rid of all those local installs.
I'd love to get included in the beta. We're in the process of splitting up a monolith into separate applications. Docker for Mac/Windows would ease this transition immensely!
Amazing! My dockerhub id is `brodsky`. We are a well-established web product running docker (almost) across the board, and currently exploring CI/CD use cases. Would love some beta love!
Hey! could you shoot an invite? We're fighting with Vagrant for our development environments and considering moving slowly into Dockerizing all the things to unify them with production deployment :)
Docker ID: alfonso
I hope I'm not too late, I'd like to contribute to the development tools.
Is Docker for Mac going to be open source? Why a private beta?
Thanks!
I'd love to be fast-tracked - most of the people at my current employer use Macs for day-to-day work, so having a slick native use case would be a godsend!
My ID is the same as my name here, karunamon. Thanks!
Would be awesome to be able to try it out. Right now I'm using VMWare Fusion with a full Linux install and Docker inside. I had problems with the filesystem integration when using docker-machine.
I'd like to get bumped to the top. Currently researching docker for my company and having it work natively on mac would be awesome. My docker id is suneilp
I would prefer to ensure my colleagues go through one transition in their developer setup, rather than two.
The vagrant-based docker machine system needs some work to be a smooth experience, such as NFS workarounds for shared file system performance, file watching and so on.
Question: Is this compatible with OS X 10.9.x or earlier? Or is it only for the latest pieces of shit that are 10.10/10.11?
edit: thanks for the correction, netheril96!
I'm curious about the state of compatibility because I've drawn a line in the sand -- I refuse to upgrade from 10.9 (since many things seem to be getting only worse and less stable in Mac OS land :).
This "review" is the technical equivalent of a YouTube unboxing video. Screenshot, screenshot, something I already knew from reading the press release, screenshot, platitude, one big technical error in the conclusion, and done.
If it really worked (especially on Windows) Docker would post the binaries instead of treating this like Wonka Golden Tickets. Love Docker, am actually waiting to be approved so I can get to building something, but posts like this are a symptom of a larger problem.
The reason we are keeping the beta private is that we don't believe the quality is good enough yet to "open the floodgates". We are sending as many invites as the engineers are comfortable with - currently that's several thousand per day. As we hit more and more edge cases (performance, stability, support for unusual configurations...) we are expanding the pool as fast as we can.
I appreciate this but in that case you should collect configuration data as part of requesting the bits. If I'm on a machine that's known borked, I'll wait. If not, gimme and I'll help you fix. Otherwise it feels like you're just using this as a marketing trick to build buzz.
I'm deep in the weeds with Docker, LXC, containers, and hypervisors all the time. Very few people care about content on those layers... these are tools that are supposed to mostly stay out of the way, after all.
Interest in Docker packaging their app as a nice Mac app, and people understanding how the install process works, is not a problem; it's a sign that these tools are finally becoming digestible by all.
There are still some rough edges, crashes when you resume from sleep (fixed in the latest update I might add), things like that. It's pretty close to an open beta in my opinion.
Until Dockrap (and its various predecessors and friends) are compatible with IT depts setting their VPN config to disallow local network access, fuck all that hipster stuff and use good old self-configured services. Hipster bullshit that can't be used in any org that remotely takes care of their network (hint: any big corporation will mandate this by contracts with huge liability figures in the contracts).
To the users: if you can't configure a simple local web server for development, you should not be qualified to develop a web service. Period. (Hint: Apache and PHP are included in OS X, but a bit outdated to be fair - but you can install/upgrade to an OAMP stack using MacPorts in MINUTES)
I know how MacPorts works. I just don't want my whole team to deal with small differences which cause the dreaded "works on my machine". I'm not saying docker is perfect, but at least those guys [Docker Inc.] are trying to make a shift to something at least a bit more immutable than what you are suggesting.
Bigco needs to get with it or get out of the way. Whenever a bloated, antiquated dinosaur is replaced by a newer, smarter company the sun shines a little brighter in my world.
> Whenever a bloated, antiquated dinosaur is replaced by a newer, smarter company the sun shines a little brighter in my world.
And whenever a "newer, smarter company" disregards fundamental IT security practices and gets hacked, the sun shines a little brighter in my world.
Security is an afterthought (if a thought at all) in many hipster operations, and it's about time someone fucks up so badly that IT security is priority #1 from the beginning.