I never quite understood how Docker lets developers share the same development environment. Most Dockerfiles that I have seen are a series of apt-get install commands. If different people build images using the same Dockerfile at different times, isn't there a chance that they will pick up different package versions? What am I missing?
Create a Dockerfile that installs all the tools you need, build it once to create an image, and then share the image with everyone else who needs it, possibly through a private registry.
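Roughly like this (the registry host and image name are just placeholders):

    # Build the image once from the shared Dockerfile
    docker build -t registry.example.com/dev-env:1.0 .
    # Push it to a private registry so everyone gets the exact same bits
    docker push registry.example.com/dev-env:1.0

    # Teammates then pull the prebuilt image instead of rebuilding it themselves
    docker pull registry.example.com/dev-env:1.0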
We use that approach now to store build environments for embedded systems: our prebuilt, shared images contain all third-party dependencies (which change only slowly). We then use those images to build our own software. Depending on the use case we either create new images from them, or just spawn containers for compiling something, copy the artifacts out of the container, and remove the container again. Works really well for us.
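As a rough sketch, assuming the sources are already inside the image (image name and paths invented here):

    # Compile inside a throwaway container based on the shared image
    docker run --name build-job registry.example.com/embedded-env:1.0 make -C /src all
    # Copy the build artifacts out of the (now stopped) container
    docker cp build-job:/src/out ./artifacts
    # Remove the container again; only the shared image sticks around
    docker rm build-job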
I build base images and tag them with a hash of the packages installed in them (this is quite easy with Alpine Linux; I use sha1sum /lib/apk/db/installed), and then explicitly reference those tags. If a package is upgraded or a new package is installed in the base image, the image tag changes.
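Something along these lines (image and registry names are made up):

    # Build the Alpine-based image, then derive a tag from its package database
    docker build -t registry.example.com/base:latest .
    PKG_HASH=$(docker run --rm registry.example.com/base:latest sha1sum /lib/apk/db/installed | cut -d' ' -f1)
    # Re-tag with the hash so downstream Dockerfiles can pin the exact package set
    docker tag registry.example.com/base:latest registry.example.com/base:"$PKG_HASH"
    docker push registry.example.com/base:"$PKG_HASH"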
People just don't care about those small differences. (Since solutions like pedantic version pinning or vendoring are well known, anyone who doesn't adopt them clearly doesn't care that much.)
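For reference, pinning looks roughly like this in a Debian-based Dockerfile (the version strings here are just placeholders):

    FROM debian:12
    # Pin exact package versions so rebuilds at different times resolve identically
    RUN apt-get update && apt-get install -y \
        curl=7.88.1-10+deb12u5 \
        git=1:2.39.2-1.1 \
        && rm -rf /var/lib/apt/lists/*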
Or you can have all the devs pull images built by a central CI system, but you'll still have package differences creep in over time.