> It could have worked, at least in theory, in a world where preserving API compatibility (in a broad sense) is much more common than it clearly is (or isn't) in this one.
Is he implying that using vendor packages is more clear than using version tags in a go.mod? That would certainly be the first time I've ever heard of someone preferring dependency management via vendor packages over a go.mod & 'go get'.
Personally I absolutely love the way go manages dependencies. I have way more confidence that my old go programs will run with minimal dependency related issues compared to programs written in something like python.
My badly communicated overall point is that I don't think it's right to say that Go started without any thought about dependency management. Instead, the Go developers had a theory for how it would work (with $GOPATH creating workspaces), but in practice their theory didn't work out (for various reasons). For me, this makes the evolution of Go modules much more interesting, because we can get a window into what didn't work.
(I'm the author of the linked-to entry. I wrote the entry because my impression is that a lot of modern Go programmers don't have this view of pre-module Go, and especially Go when you had to set $GOPATH and it was expected that you'd change it, instead of a default $HOME/go that you used almost all the time.)
It's certainly an interesting angle. It might have been clearer if the documentation and how-tos had explicitly used that sort of terminology: e.g., saying that step 1 of creating a project is to create a new workspace directory for all the dependencies, then create a package directory (with a git tree) inside it.
There are certainly nice things that the current Go module system buys; but one thing I miss is that under the old system, if one of the packages wasn't working the way you expected, the code was right there already; all you had to do was to go to that directory and start editing it, because it was already cloned and ready to go. The current thing where you have to clone the repo locally, add a "go.work" to replace just that package locally to do experimentation, and then remove the "go.work" afterwards isn't terrible, but it's just that extra bit of annoying busy-work.
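For reference, the go.work round trip described above looks roughly like this (repo URL and paths are placeholders):

```
git clone https://github.com/example/somedep ../somedep   # get an editable copy
go work init .            # create a go.work for this module
go work use ../somedep    # build against the local clone instead of the pinned version
# ...hack on ../somedep, rebuild, debug...
rm go.work                # back to the version pinned in go.mod
```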
But being able to downgrade by simply changing the version in go.mod is certainly a big improvement, as is having the hashes to make supply chain attacks more difficult.
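For illustration, a downgrade really is a one-line edit, and go.sum carries the hashes mentioned (module path and versions here are hypothetical):

```
// go.mod
module example.com/myapp

go 1.21

// downgrading is just lowering this version and re-running `go mod tidy`
require github.com/example/somedep v1.4.2
```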
You can directly edit the downloaded code when using modules, you don’t need your go.work flow at all. I often do it while debugging particularly weird bugs or to better understand library code I’m using.
You can just edit the copy that ‘go get / go build’ download to your system. Afterwards undo the edits or re-download the module to wipe out your edits. No need to use go.work, local replaces, or anything. The files are on your disk!
From my experience, the reason $GOPATH had pkg, bin, and src directories was that they wanted to limit how far the toolchain spread across the filesystem. I could zip my go directory, put it on another computer, and it would still work as well as before. I hated that Python couldn't do this; even pyenv has a high chance of not working correctly when moved to a different distro, and most of the time I don't want to download 500 MB of Anaconda every time I swap machines.
There is a small asterisk on "more clear": if one vendors the deps, they are guaranteed to be available and not subject to GitHub outages, and GitHub seems to host 99.5% of all Go deps in the world.
It for sure is easier to reason about a declarative text file than 18 quadrillion .go files in a subdirectory, but using go.mod does come with tradeoffs
> Is he implying that using vendor packages is more clear that using version tags in a go.mod? That would certainly be the first I’ve ever heard of someone preferring dependency management via vendor packages over a go.mod & ‘go get’.
Isn't it the case that Debian builds of Go 'packages' (i.e. making a .deb package) use the pre-modules mechanisms? If so, they would seem to prefer it for some reason.
They prefer it for a number of reasons, for better or worse.
The big reasons I usually call out are:
- packages should be buildable without calling out to any network resources. (this is the primary reason Debian dh-make-golang disables 1.11 style gomod support)
- Debian specifically avoids vendored copies of code. This does create functionality and maintenance challenges, but it also produces a leaner installed code base. Once again, for better or worse, this is a standing part of Debian package policy, and warts from this particular choice stand out all over, not just in Go. (Newer Debian Python, for example, outright disables pip for non-venv Python.)
There are other nits and warts that become apparent, but I think those all waterfall out from the two points I've called out here.
The one issue I had with Go when poking at it years ago was that it seemed to have fairly strict opinions on directory structure. It felt like if your project wants to use Go it had better be about Go. It was immediately in conflict with the other code I had going on in the library and where on my disk I wanted to put things.
Not only that, but those extremely strict opinions on directory structure extend to your entire home directory.
As an example, I wanted to install "gopls", the language server for go. There was a command that could install the dev tools, but it also generated a $HOME/sdk. With such a generic name in my home directory, there's not a chance that in 6 months time, I'd remember what it was for, why it was there, and whether it is safe to delete. No amount of changing $GOROOT could convince it to write into a different directory, nor was the go language server available through any other installer.
That was my next step, but I had already spent an afternoon on it, so I ended up just making do without and using grep.
(Partly because Docker has its own set of obnoxious quirks, as it assumes everything can and should be run as root. And rootless Docker cannot coexist on a system with over-permissioned Docker. I really need to switch over to podman at some point.)
This has been fixed with the newer modules system. You do have to name the folders the same as your packages but you can put the folders anywhere you want now.
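A minimal sketch of that, with a hypothetical module path; the directory can live anywhere on disk:

```
mkdir -p ~/anywhere/shiny && cd ~/anywhere/shiny
go mod init example.com/shiny   # the path importers use; not tied to disk location
go build ./...
```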
One big disadvantage of using the domain name in the module specification is that, as a package provider, you are kind of permanently chained to the hosting provider of your choice, e.g. github.com. Moving your package would destroy its "identity".
It is bad enough how much of a monopoly and how much lock-in GitHub has today (probably the reason Microsoft bought it), and a language environment shouldn't contribute to or even amplify that role.
The alternative (as with e.g. npm) is being tied to the hosting provider for the package ecosystem. If you want to, you can use your own domain name for your Go package URL and then link it to a repo hosted elsewhere. There's really no namespace on the internet that you have a more permanent claim on than a domain that you yourself have registered.
In the case of npm you could also use something like Sonatype Nexus, push your packages to your own npm repository and install them with --registry or something like that: https://stackoverflow.com/a/35624145
I also ran GitLab in the past: https://about.gitlab.com/ but keeping it updated and giving it enough resources for it to be happy was troublesome.
There's also GitBucket: https://gitbucket.github.io/ and some other platforms, though those tend to be a little bit more niche.
Either way, there's lots of nice options out there, albeit I'd still have to admit that just using GitHub or cloud GitLab version would be easier for most folks. Convenience and all.
Not at all. If you just refer to a package by its name, that is good enough for a compiler. I think automatic download of packages from third-party sites doesn't belong in a language and its native tooling. Downloading and installing third-party packages could be done by a tool provided alongside the language's core tools, but should not be required just to compile a project.
Strictly speaking, as with cargo/rustc, the Go module system and Go compiler are separate, and the latter can run without calling the former. There are various flags to tell the "go" command (which is a frontend to other tools) how to behave when modules are involved, e.g. so that it works well on airgapped networks and locked-down intranets or can be used safely with untrusted source. You can also still vendor modules to ensure they are locally available.
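A sketch of the knobs being referred to (the flags and commands are real; the setup is illustrative):

```
GOPROXY=off go build ./...   # never touch the network; fail if a dep is missing
go mod vendor                # copy all dependencies into ./vendor
go build -mod=vendor ./...   # build strictly from the vendored copies
```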
That's why there's https://go.dev/ref/mod#vcs-find .
Your import path doesn't have to map directly to vcs repos, as long as you can serve an html meta tag to redirect it to where your current repo is:
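For anyone who hasn't seen it, the tag in question looks like this (domain and repo are illustrative); the go tool fetches the import path with `?go-get=1` appended and follows the redirect:

```
<meta name="go-import" content="example.com/mymodule git https://github.com/someuser/mymodule">
```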
But you can't serve this meta tag if you don't control the URL/domain anymore, and you can't force all of your users to use e.g. an intercepting HTTP proxy for such requests.
How exactly? If e.g. the registrar seizes the domain name that I've originally used and gives it to someone else, what, exactly, are my plans supposed to be for that?
Thanks for the link. When I started learning Go modules that spec was not available (or it was a lot smaller). After a brief look I feel the Go modules spec is longer and more complex than the (original) Go language spec which is quite ironic :-D. I still like Go although it's sad seeing it straying from the 'one true' path of simplicity.
It's a bit shorter than the current spec, but it does cover more: how it works conceptually, how the default toolchain works, and how it interacts with a lot of things outside of a world it creates on its own (which is what a language is).
The benefit is that you're not creating a new directory of names, they reuse an existing well established one.
Creating a central authority for names is a lot of work.
I'm not that experienced with Go, but I believe it's possible to create a vanity package name on a domain you control. If you want to change hosts, you can just point your domain at something else.
Yes, if your module URL is on a site you control, you can serve a <meta name="go-import" ...> tag to redirect it to your source repo. The module URL is permanent but you can move the repo.
It's a bit more fiddly if your module is part of a monorepo and doesn't live at the root. In that case your go-import tag needs to point to a GOPROXY server. I have a proxy server here: https://github.com/more-please/more-stuff/tree/main/gosub
Indeed not creating a new directory of names is an advantage, plus it also means all the short names are effectively reserved for the standard library, and there's no scramble from developers to squat on all the "good" names.
I don’t write go myself but man did they get a lot of big decisions right. I’d be totally open to writing go in future but I have java so don’t really need it. I envy go’s build and deploy stories though.
The obvious advantage of using domain names and in general URLs as the package names is that the Go project doesn't have to run a registry for package names. Running a registry is both a technical and especially a political challenge, as you must deal with contention over names, people trying to recover access to their names, and so on. By using URLs, the Go project was able to avoid touching all of those issues; instead they're the problem of domain name registries, code hosting providers, and so on.
I'm a simple man, and I experiment a lot. I create a module locally and, bam, straight from the beginning I have to decide where I will host it. Very often I don't want to make it public, so I only keep it locally on my machine; but I often need to share my module among my several machines (laptop, mini desktop) while still not sharing it publicly on GitHub, and it's annoying that there is no easy way to do this, AFAIK. In a better world I would create a "module" in a folder, give it at most a symbolic name (like 'ShinyModule'), and share it in various ways: I could share the folder using samba, ftp, ftps, sftp, or https, and on the consuming side you would just import 'ShinyModule' and have a single file per consuming module saying where to find it.
And you can do all these things smoothly with rust's cargo: use a local relative path, use a git URL, or use a published package name. It's perfect if you want to try and hack around a dependency.
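For comparison, the three cargo forms mentioned look like this in a Cargo.toml (crate names are illustrative):

```
[dependencies]
serde = "1.0"                                       # published on crates.io
mylib = { path = "../mylib" }                       # local relative path
other = { git = "https://github.com/user/other" }   # straight from a git URL
```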
It's not because the tooling is better, which also happens to be true by far, but because they didn't tie themselves down to a domain name scheme. Funny, given that go waited a long time to take a shot.
Rust has a different problem: too many dead packages with desirable names on crates.io. There's a lot of derelict cruft in that shared namespace, especially for packages outside those most commonly used.
The company I am currently at has changed the domain name of its internal GitLab server three times in two years. The older domains are tentatively supported, but they behave differently with respect to authorization, so we had to switch our imports wholesale or the builds would just keep breaking for no apparent reason.
Go’s initial development priorities were ordered for Google’s needs. The described problems weren’t major issues at Google because they store or at least used to store all of their code in a single repo.
Not only that, but Google doesn't use language-provided build systems anyway because it's important (to enable many other useful dev tools and practices) that everything is built using blaze (the thing externally released as bazel).
So really, Go's decisions about modules, vendoring, source layout, and the "go" command itself are all kind of irrelevant within Google. (The underlying Go compiler is used of course)
Golang's approach in the days of GOPATH fit very well with its native environment, which was Plan 9, where mounting remote sources as /n/... wasn't unusual.
> The whole thing is fairly similar to a materialized Python virtual environment.
The feeling I got about GOPATH, back when I first encountered it, was not that it should be used like a Python virtual environment, where you'd have a separate one for each of your projects. What I understood back then was that you were supposed to set up GOPATH only once for your user, in your shell's startup scripts, and all of your projects would have to live within that directory, in the rigid structure the Go tools dictated. The default for GOPATH added later was just to avoid confusing newbies who aren't used to adding environment variables to their shell startup scripts.
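Concretely, that single per-user workspace looked something like this (repo names are examples):

```
$GOPATH/
├── bin/                           # installed binaries
├── pkg/                           # compiled package objects
└── src/
    └── github.com/
        ├── someuser/myproject/    # your own code...
        └── otherdev/somedep/      # ...and every dependency, side by side
```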
As someone who used Go since before v1, I can say that Go's approach to dependency management was basically "we don't want to touch it until we know we have the right idea". At the time, dependency management solutions were pretty primitive compared to what we have today...there was no caching, and if the central server where you got your dependencies was down, you were shit out of luck. So this was really not a bad decision for the language maintainers if they wanted to keep their sanity and not subject their user base to "legacy on arrival" software. By depending on URLs and using a `$GOPATH`, this problem was solved from a localized perspective...as a developer, I could vendor in packages, not check them into Git, and compile the program. It worked.
However, the biggest problem with depending on packages in Go back then was that it was difficult to communicate to other people on your project the exact version of the dependency that you wanted them to install. For projects in which you were the only developer, this wasn't an issue...but as soon as you started to use Go in a team setting, it became a real blocker toward getting your work done. Go developers needed _some_ way to communicate what version of each package they wanted to install, and a bunch of solutions popped up to help with that. But they were all still bound by the `$GOPATH` constraint and such.
Although it took a lot longer than many predicted, I'm still pretty happy with how Go approached dependency management at the end of the day. Generally, all I have to do is import a dependency from a known URL and my editor/Go modules will take care of the installation work. This is way better than the JS world, in which if I want to just sketch something out I have to actually install the dependencies I need otherwise TypeScript will yell at me. With Go, it all seems to happen automatically.
- If you want to make a change in one of your dependencies, you have to fork the repository, and in the forked repository you have to textually rename the package to the new name. This makes for abysmal maintenance and unneeded merge conflicts if you want to maintain parity with upstream code.
- There is the go replace directive, but this does not transitively apply to dependencies of the module that declares the go replace directive.
- If your patch gets into the upstream repository, now you have to undo the forking (again via a large textual rename).
- If a dependency of your code and your code both depend on the same package, you are forced to take one version per binary that gets compiled. This is just plain absurd and leads to situations where you cannot bump the dependencies independently. If you have a tree of these dependencies, you must update each dependency in the order that respects the dependency tree. This sort of defeats the purpose of specifying and locking the dependency version.
- Overall, go was designed for a mono-repo in a company (Google) that does not version their software (everything runs at tip), and it shows in any type of effort that attempts to re-use software in non-trivial fashion, with distributed development that happens at different rates in different repositories.
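The replace caveat above, sketched as a go.mod fragment (module names and paths are hypothetical); note that it only takes effect when this module is the main module being built, never when it is pulled in as a dependency:

```
// go.mod
module example.com/myapp

require github.com/example/somedep v1.2.3

// points builds of *this* module at a local fork; modules that
// depend on example.com/myapp will ignore this line entirely
replace github.com/example/somedep => ../somedep-fork
```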
The last point is simply not true, notably Google does not use the go build or dependency tooling, instead using blaze. Blaze handles the aforementioned issues as well.