That's not a big deal. I imagine that the average OS user doesn't need Apache and PHP. People who do web development use Docker, because for any serious development you'd want your environment to match the live one. And there's still the option to use brew or MacPorts to easily install all those programs.
I believe most macOS PHP developers use Homebrew. Some in combination with Laravel Valet for easy version switching and local domain management.
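If you haven't tried it, the setup is roughly this (formula and command names are from memory, so double-check the current Valet docs):

```sh
# Rough sketch of a Homebrew + Laravel Valet setup; versions are only examples.
brew install php@8.0 composer           # Homebrew-managed PHP and Composer
composer global require laravel/valet   # Valet itself ships as a Composer package
valet install                           # sets up dnsmasq + nginx for *.test domains
valet park                              # serve every project in the current directory
valet use php@7.4                       # switch the active PHP version when needed
```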
Docker is horribly slow on macOS. I'd wish that on no developer. We had to use Docker for one project, but quickly decided to just run the tests in Docker and let the developers figure out their environments themselves.
If you wonder why that is, it's because Docker for Mac actually runs the Docker engine inside a Linux virtual machine.
That means that whenever you call Docker, it has to copy your "context" to the virtual machine, then actually run the Docker invocation inside the VM. This gets slow and annoying very fast for even 10MB "contexts".
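A partial mitigation is a .dockerignore next to the Dockerfile, so the heavy directories never get shipped to the VM as part of the build context; the entries below are just typical examples for a PHP project:

```
# .dockerignore - keep the build context small
.git
node_modules
vendor
*.log
```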
"Context" is anything in your working directory, more or less.
Another common slowdown is bind mounts. Each file system operation on a bind mount is an RPC round trip between Docker for Mac and the Linux VM.
I’ve never been super happy with Docker for Mac. I did a pretty deep dive into figuring out why it’s so slow with no satisfying conclusion: https://stackoverflow.com/q/58277794/30900
This is also true for Windows. WSL2 made containers faster, but FS is painfully slow. Sounds like the best thing (apart from switching) is running Linux in a VM (Hyper-V does OK) and keeping the IDE, Docker and all the data in there.
> WSL2 made containers faster, but FS is painfully slow.
If you keep your code base in WSL 2's file system it's really really fast. Even thousands of tiny asset files will get picked up and compiled through multiple Webpack loaders in ~100ms on 5+ year old hardware.
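The trick is simply to keep the checkout on the Linux side of the boundary; roughly (the repo URL here is made up):

```sh
# Clone into WSL 2's own ext4 file system, not under /mnt/c
cd ~ && git clone git@github.com:example/app.git
# Windows-side editors can still reach it via \\wsl$\<distro>\home\<user>\app
```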
What I really had an issue with was the time between modifying a file in an editor (running in Windows) and the change actually being present in the container. I had to restart tests way too often because the old code was still executing, which was really annoying, as sometimes you can't tell whether a test failed because of the old code or because the implementation was wrong.
The problem here is the delay between Windows and WSL2, once the files are in WSL2, it's fine.
FreeBSD once had Docker working through the Linux compatibility layer. No virtualisation. I always thought it would have been really cool to see that ported to macOS.
That depends a lot on whether the FreeBSD compatibility layer is kernel or user space, as macOS, contrary to common belief, does not use a FreeBSD-derived kernel but one derived from the CMU Mach project. And yes, I know Jobs said otherwise, but he was never a trustworthy source for anything.
I think macOS actually uses a different binary executable format (Mach-O) than the ELF64 one shared between Linux and FreeBSD, so the kind of binary-level compatibility that Linux and FreeBSD share might not exist between macOS and FreeBSD.
This continues to baffle me. I have an Apple laptop, I run macOS on my laptop, but I do all my development on a cloud VM.
I'm surprised to learn that there are developers out there that have the cash for Apple hardware but apparently lack the cash or connectivity to run nothing more than an IDE locally, with everything else happening remotely.
My devices (laptops, tablets) are all glorified thin clients as far as development work goes. The meat never happens locally.
Am I a rare case? Are there reasons I'm missing why this isn't palatable to most people, besides cost + connectivity? Do most people genuinely still not have the option of decent connectivity (either fixed or wireless or a combination of the two)?
> don't have the cash or connectivity to not run more than an IDE locally
It's not about lack of money. I prefer developing everything locally because it feels snappier to me, even with a good internet connection, or even a local server. It might not make a difference to you but that's what I prefer.
Thanks for the responses, it's why I'm asking, as I genuinely don't get it.
I think my view stems from the days of having to re-install my Windows workstation every 6-12 months in order to regain decent performance, so I moved as much as I could onto a 'different host' (usually a local Linux server) to minimise the pain of backups/restores when rebuilding the workstation.
You definitely do not have to re-install Windows every 6-12 months for decent performance. Just don't install every doodad and hopefully don't have corporate IT pushing 10 management applications running in the background.
Developing in a cloud VM is painful in other regards, specifically with regards to IDEs. Basically your options become to use a local IDE, with slow access to your files (not fun when PHPStorm needs to re-index your vendor directory), or to use a cloud IDE (none of which I know of are particularly good for PHP, nor as snappy as running your IDE locally).
Of course, you can just use a text editor instead of an IDE, but once you get used to being able to jump to definitions, get method signature autocompletion, refactoring, syntax checks etc, it's kind of hard to go back to just a text editor.
I’ve found the best middle ground is to use a Mac and then mostly develop in a local VM. Snapshots/etc are wonderful, and they can be transferred from machine to machine, so “setting up my development environment” is as simple as “install Parallels.”
> I prefer developing everything locally because it feels snappier to me, even with a good internet connection…
oarsinsync’s IDE is sending each keystroke from his local computer to a cloud machine, where the source code lives. That source code compiles, executes, is tested in the cloud. Is this the setup you’re comparing with?
The theory behind this is sound: “When the size of the program is smaller than the data, move the program to the data.” In this particular instance, the code edit keystrokes are smaller than the total amount of source code. If the complete source code, packaged or compiled program has to be moved to the cloud anyway, it saves a lot of data transfer to just move the edits.
This assumes you’re running your application in the cloud, and the trade-off is that you need a reliable network connection, otherwise you might find yourself unable to edit when the network is down.
I believe when latency is a concern the calculus skews much more towards providing feedback locally based on the data. For instance, a reasonable strategy could be to apply updates to the file locally in the IDE, to give immediate feedback as the user types, but then send those updates to the server where other higher-latency features such as code completion or diagnostics are run. Sadly I have yet to see such a setup.
I use "cloud" as a catch-all that covers local and remote VMs, that are all built using standard templates. My local VMs are LAN-local, not host-local. My remote VMs are all <10ms away.
My particular workflow involves running my IDE locally, and having files hosted remotely. My IDE is plenty snappy, running my code is plenty snappy, but I'm slowed down by a need to commit + push changes to a repository.
I have it on my stack to do something like syncthing to keep a local + remote cache without needing to explicitly go through version control, but I suspect that'll just shift the latency out of my workflow, and trip me up in different ways.
I do my dev locally. It's so much faster, and I have a dynamic IP so the work of setting up a private VPN or resetting the firewall every day would drive me mad. I've been thinking about setting something up so I can do dev work outside during the nice weather on a highly portable but underpowered laptop, but so far my unwillingness to go through the effort of setting it up exceeds my desire to have it set up. (In the past I've handled this quite fine with a powerful laptop. But right now my powerful laptop has zero nanoseconds of battery life, and the idea of discarding an otherwise working laptop bothers me on environmental grounds. It's approaching its fourth year of life, but I can't find anything that obviously exceeds its specs.)
It also means I just don't have to worry about things when the internet goes down. Back in the olden days of working in an office (at a company where most people took the work-at-home option), I can remember how often the other staff would ask me "is the internet down?" and my answer would be "I don't know, let me check". My home internet connection only seems to go down for the moment the IP changes, but office internet connections seem to be subject to IT staff that need to constantly change something, upgrades, who knows what the excuse is today.
However, I do my "local" work in a virtual machine or a Docker container. I use GNU/Linux as the dev OS and as the test/production OS, but that's because I'm using what I'm comfortable with - there's no technical reason I should do it. My co-workers have been quite productive using macOS and Windows. This probably depends on your language environment: if you're using a JetBrains IDE for your inspections, I think it's not hard to be OS-agnostic. But when I've used LSP servers, they've typically expected to run locally, and similarity helps.
I don't know why doing development in Linux is an unpopular opinion.
I've been using Linux on my workstation and laptop for the past 20 odd years and I very much prefer it to the MacOS environment on the company issued Macbook.
Doing development in Linux is not an unpopular opinion.
People who do develop in Linux telling everyone else to develop in Linux because not developing in Linux is Wrong and Bad, however, tends to be unpopular.
The fact that Docker runs in HyperKit/virtualization is not really the culprit for the slowness. That makes it a bit slower than native Docker, but virtualization is pretty good these days. The real problem is file system access. If you keep your volumes inside Docker, it's pretty fast. Alternatively, you can try using NFS:
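A compose-file sketch of an NFS-backed volume on Docker for Mac looks something like this - it assumes you've exported the directory in /etc/exports and enabled nfsd on the Mac, and the path is only an example:

```yaml
volumes:
  app-src:
    driver: local
    driver_opts:
      type: nfs
      o: addr=host.docker.internal,rw,nolock,hard,nointr,nfsvers=3
      device: ":/Users/me/Projects/foo"   # must match your /etc/exports entry
```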
Use :delegated in the volume mount to speed stuff up.
Alternatively, if you are running Docker Compose, you can use two mounts and a volume. I'm assuming your application is a standard Symfony/Composer application, checked out in $HOME/Projects/foo:
1) $HOME/Projects/foo:/var/www
2) some-volume:/var/www/vendor
Then, once the containers are created, run composer install inside the container, and then either run it again on the host or copy the files over using docker cp so that your IDE picks up the dependencies.
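In compose-file form the layout above comes out roughly like this (image name and paths are placeholders):

```yaml
services:
  app:
    image: php:8.0-fpm                  # placeholder image
    volumes:
      - $HOME/Projects/foo:/var/www     # your working copy, bind-mounted
      - vendor-data:/var/www/vendor     # vendor/ stays inside the Linux VM

volumes:
  vendor-data:
```

After `docker-compose up -d`, `docker-compose exec app composer install` fills the named volume as described above.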
For more complex stuff such as Drupal where you also have to account for web/modules/contrib, web/themes/contrib and the likes, you'll need to do the volume trick once again for each Composer-installed folder.
The result will be that the tens of thousands of files that Composer installs, and that each request needs, live inside the Linux VM without any performance penalty, while the files you are actively editing are still instantly visible in the VM.
Long term I'd really wish if Docker for Mac/Windows could use inotify, on-demand synchronization plus a boatload of caching to get the performance issues under control. Oh, and while we're at it, it really really sucks that "host.docker.internal" is only available on Macs - you need that one to get xdebug to connect back to the "host", so you need two separate Docker Compose files, one for Mac developers and one for Linux developers.
> for any serious development you'd want your environment to match live one
Not sure what you mean by "serious", but I get paid for doing webdev (Rails) on an ARM Mac and we deploy to x86 Linux boxes and our only environment is Gemfile.lock, yarn.lock and roughly the same DB version. I don't remember ever having problems with that.
I got paid for years doing webdev before I achieved anything that I would think of now as "serious development". I'm sure I'll look back in another 5 years and think the same about my current self.
I'm not meaning to say that you yourself are not a serious developer, nor that webdev is not "serious development", but if you do this long enough you will encounter a problem that happens in production that can't be reproduced in your dev environment.
Yes, it's a good idea to have your development environment mimic production as closely as is practical, even if it's not something that you currently practice.
My objection to that is that in the long run it will make the source code less portable and thus limit my options and also may increase the risk when doing future upgrades.
I prefer that my team develops on different operating systems or Linux distributions and point releases, not only to catch weird behavior early, but also because in the long run this usually eliminates any environment assumptions in the source code and thus creates a cleaner code base.
If it is necessary to replicate an exact production behavior I have staging for that.
>I got paid for years doing webdev before I achieved anything that I would think of now as "serious development". I'm sure I'll look back in another 5 years and think the same about my current self.
Maybe your definition of "serious development" is whatever is around your current level instead of something objective?
I'm not sure that there's such a thing as an objective definition of "serious development", and my response clearly puts a spotlight on the subjectivity of the term, does it not?
Package-manager solutions tend to have a similar effect.
The "non-serious" sort of development mentioned was more where you'd be dealing with a large number of tools without any build tool to ensure uniform versioning.
There are many reasons to avoid using Docker, besides the fact that it is a dumpster fire managed by a failing company.
The new M1 macbooks couldn't even run Docker, which was enough to get the last holdouts of my dev team to switch to just properly managing their local environments.
Docker can be useful, but more often than not it is a crutch used to make up for the fact that many web developers don't really know how to properly manage their machines.
Docker has worked on M1 Macs since... mid-December of last year. Less than a month after they were released? Why was anyone on your "dev team" using M1 Macs within weeks of launch? At that point, even installing things with Homebrew required compiling everything from scratch. Not a very great use of developer time.
> more often than not it is a crutch used to make up for the fact that many web developers don't really know how to properly manage their machines.
This is like saying that an electric starter is a crutch, and Real Drivers relish the opportunity to spend 20 minutes using a crank starter to "properly" start their car's engine every time they need to go somewhere. No thanks. Like using a crank starter, managing a bespoke local environment is not some incredible skillset, even though you seem to be giving yourself a nice pat on the back for it. It's simply a questionable use of developer time, and doubly so if all of your production environments aren't identical.
Why is "dev team" in quotes? Seems like an odd thing to get pedantic about.
We have a BYOD policy, and several members of my team opted to get the new M1. They were junior devs who didn't know better than to be early adopters. I agree that it was a huge waste of time, and Docker was a major contributor to that.
Your analogy about starters doesn't really make sense. We aren't the driver, we are the mechanic, and we should know how the engine works in its entirety. Managing a local environment is a skill set, and frankly it is a skill set that many developers lack.
You can keep using Docker if you want, I don't really give a shit. It's dead-end tech that will be replaced in a few years by some new startup, while the *NIX systems underneath it will still be there and the developers reliant on Docker will have to learn a new ephemeral skill set.
Why are you assuming devs who use Docker don't know how to manage a pretty simple LAMP setup?
It's easy; Docker is a bit easier if you have multiple environments to deal with. If Docker is gone in the next few years (lol) then we'll use the next one, if it's easier too.
> We aren't the driver, we are the mechanic, and we should know how the engine works in its entirety.
I do agree with this though, which is why I once built a server from scratch (as in physical parts) and put a website on it to find out what I was lacking in "full stack"*: installed the OS -> installed and configured nginx, php, mysql, mail, sftp, a bit of Zabbix to monitor the whole thing -> built a website to go on it -> made it live. Just for a laugh. At the time the only trouble I had was CSS :)
But aye, no skillset if docker dies haha
* Which, it later turns out, doesn't include any of the build/install/OS/monitoring, apparently. Full stack my arse.
WSL 1 uses picoprocesses, an idea taken from Drawbridge project [0][1], so not quite the original POSIX personality from NT, whereas WSL 2 just runs Linux on Hyper-V as guest, with some additional drivers for interoperability with Windows.
One great thing about Gentoo is that you can have multiple versions of packages like PHP, so you can run legacy and the newest, at the same time, on the same system, without having to deal with docker.
How? Is it because you compile it yourself so it's easy to specify an alternative prefix? How easy is it to use the version you want? (e.g. working on two projects simultaneously - say fixing a bug on production while otherwise working on compatibility with the new version).
The compiled binaries have a suffix, and you are able to select which binary you want as the default. So `php` is set to, say, `php74` or `php8`.
The suffix also applies to the config files and init scripts.
As far as running two versions on the same webserver, you just have to set up the config for each version to be on different ports or sockets.
Also, to clarify a bit, this is all using the Gentoo package manager, so you do not have to manually compile the packages, or manually create the symlinks.
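For the curious, the whole flow is roughly this - slot and target names are from memory, so treat it as a sketch rather than copy-paste material:

```sh
emerge dev-lang/php:7.4 dev-lang/php:8.0   # slotted installs live side by side
eselect php list cli                        # show the versions available for the CLI
eselect php set cli php8.0                  # make plain `php` point at 8.0
eselect php set fpm php7.4                  # FPM can track a different version entirely
```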
In fact, thanks to consulting across .NET, Java and C++, targeting mobile, desktop and Web, I have plenty of tools installed and configured for each project.
You're an endangered species. I find it pretty funny how so many developers jump through so many hoops to avoid developing on the OS that's actually running the application in production. Apparently wasting many gigabytes of RAM on unnecessary virtualization, suffering from extremely slow IO, and constantly bashing your head against file permissions/network configuration/whatever else is so much more productive.
(Please don't waste your time replying with "I can't use X because of Y". The world does not need any more of these anecdotes. We've heard it all. If you don't want to be a part of the change, downvote and move on.)
With the right tooling and a good understanding of it, you can easily get away without Docker or the like. Don't get me wrong, personally I avoid installing as much as possible and prefer containers, but it's not necessary if you have the knowledge and the tooling. Especially with a scripted language like PHP that has most extensions bundled in most distributions.
> jump through so many hoops to avoid developing on the OS that's actually running the application in production.
lolwut? Most of us have been using Homebrew longer than Docker has been popular, often with codebases older than that, too. If anything, it takes far more hoop-jumping for the average user to run a full 1:1 VM on OSX.
Yeah, I do use PHP, but I'd rather install things myself than have a bunch of preinstalled stuff potentially needing security updates.
Also, considering the Mac is still on like bash v0.0.4 or something, I'd rather be able to install the latest version of stuff as and when I need it (or use Docker anyway, making the whole thing moot as you say).
Off topic: WHY does the Mac's `rm` not support the -rf flags being at the end of the command, dammit. I don't trust myself putting them up front haha
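(My usual workaround, in case it helps: Homebrew can install a current bash plus GNU coreutils, whose option parsing happily takes flags at the end; the g prefix is how Homebrew names the GNU tools by default.)

```sh
brew install bash coreutils
grm some-dir -rf     # GNU rm permutes options, so trailing flags work (some-dir is a placeholder)
```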
But then if you are doing serious development, you should also be comfortable with Homebrew. Being a Ruby developer myself, I never used the bundled interpreter! This news is not that relevant. Obviously these runtimes are included because some packages or libraries need them, but I'd not depend on them for development.
This kind of HN elitism bothers me. PHP built its reputation on quick 'n dirty hacks which were easy to deploy on cheap hosting not by aping the infrastructure of some tech giant with fleets of servers, which is what Docker was originally designed for. The average PHP dev relying on the version which comes with the OS is most certainly not using Docker. Most home broadband upload speeds make Docker a non-starter.
If you don't have the energy to substantiate your counterargument, please don't use the little energy you have to post this kind of comment: snarkiness creates shallow and uninteresting conversations.