Hacker News

My badly communicated overall point is that I don't think it's right to say that Go started without any thought about dependency management. Instead, the Go developers had a theory for how it would work (with $GOPATH creating workspaces), but in practice their theory didn't work out (for various reasons). For me, this makes the evolution of Go modules much more interesting, because we can get a window into what didn't work.

(I'm the author of the linked-to entry. I wrote the entry because my impression is that a lot of modern Go programmers don't have this view of pre-module Go, and especially Go when you had to set $GOPATH and it was expected that you'd change it, instead of a default $HOME/go that you used almost all the time.)




It's certainly an interesting angle. It might have been clearer if the documentation and how-tos had explicitly used that sort of terminology: e.g., saying that step 1 of creating a project is to create a new workspace directory for all the dependencies, then create a package directory (with a git tree) inside it.
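Concretely, under the old system that might have looked something like this (the project and import paths here are hypothetical):

```shell
# One workspace per project: $GOPATH points at the workspace root,
# and both your own package and its dependencies live under src/.
export GOPATH="$HOME/work/myproject"
mkdir -p "$GOPATH/src/example.com/me/app"   # your package (a git tree)
cd "$GOPATH/src/example.com/me/app"
# Dependencies were fetched into the same workspace:
#   go get github.com/some/dep
# ...which cloned into $GOPATH/src/github.com/some/dep
```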

There are certainly nice things that the current Go module system buys; but one thing I miss from the old system is that if one of the packages wasn't working the way you expected, the code was right there already. All you had to do was go to that directory and start editing, because it was already cloned and ready to go. The current flow, where you have to clone the repo locally, add a "go.work" that replaces just that package so you can experiment, and then remove the "go.work" afterwards, isn't terrible; it's just that extra bit of annoying busy-work.

But being able to downgrade by simply changing the version in go.mod is certainly a big improvement, as is having the hashes to make supply chain attacks more difficult.
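As a sketch, the downgrade is a one-line change in go.mod (module paths here are hypothetical):

```
// go.mod
module example.com/me/app

go 1.21

// Downgrading: change the version on the line below and run `go mod tidy`,
// or run `go get example.com/somelib@v1.2.5` and let the tool rewrite
// go.mod and go.sum for you.
require example.com/somelib v1.2.5 // was v1.3.0
```

The hashes live in go.sum; a downloaded module that doesn't match its recorded hash fails verification.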


You can directly edit the downloaded code when using modules, you don’t need your go.work flow at all. I often do it while debugging particularly weird bugs or to better understand library code I’m using.


One way or another, the current workflow has to be:

1. Download the code (or update the copy you downloaded last time you had an issue)

2. Redirect `go build` to use the downloaded copy rather than the upstream copy (either by modifying go.mod or go.work)

3. Do the editing

4. Undo #2

Under the old system, you just had to do #3. And sure, it's not that much work, but it's a bit more than I had to do before.
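The four steps above, sketched as shell commands (the repo URL and paths are hypothetical):

```
# 1. Download the code (or update an existing clone)
git clone https://example.com/some/lib ../lib

# 2. Redirect builds to the local copy via a go.work file
go work init .        # create go.work listing this module
go work use ../lib    # add the local checkout

# 3. Edit ../lib, rebuild, debug...

# 4. Undo the redirect
rm go.work            # (and go.work.sum, if one was created)
```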


You can just edit the copy that ‘go get / go build’ download to your system. Afterwards undo the edits or re-download the module to wipe out your edits. No need to use go.work, local replaces, or anything. The files are on your disk!
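A sketch of that flow (the module path is hypothetical; note the cache is shared by every project on the machine, and files are extracted read-only, so they need a chmod first):

```
# Find the module cache (usually $GOPATH/pkg/mod)
go env GOMODCACHE

# Cached files are read-only; make the module writable before editing
chmod -R u+w "$(go env GOMODCACHE)/example.com/somelib@v1.2.5"

# ...edit the files in place; every build using this version sees the edits...

# To discard the edits, drop the cached copy and re-download
go clean -modcache   # clears the entire cache
go mod download      # re-fetches pristine copies for the current module
```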


It's much better to use go.work and add it to your .gitignore. That way you don't have to muck with go.mod and risk committing the file with local replaces.
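A minimal go.work for that flow might look like this (the dependency path is hypothetical):

```
// go.work -- kept out of version control
go 1.21

use (
	.          // this module
	../somelib // local checkout being debugged
)
```

Plus `go.work` and `go.work.sum` entries in .gitignore, so the local redirect never lands in a commit.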


> but in practice their theory didn't work out (for various reasons)

For me, one of the reasons I disliked $GOPATH is that source code was not in $GOPATH; it was in `$GOPATH/src`.

The fact that they are very strongly against relative imports also didn't help, even if I get why they do it.


From my experience, the reason $GOPATH has pkg, bin, and src directories was that they wanted to limit how far the tooling spread across the filesystem. I could zip up my go directory, put it on another computer, and it would work as well as before. I hated that Python couldn't do this: even pyenv has a good chance of not working correctly if you move to a different distro, and most of the time I don't want to download 500 MB of Anaconda every time I swap machines.
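For reference, the layout being described is roughly:

```
$GOPATH/
├── bin/   # compiled executables ('go install' puts them here)
├── pkg/   # compiled package objects (later, pkg/mod holds the module cache)
└── src/   # all source trees, laid out by import path,
           # e.g. src/github.com/some/dep/...
```

Because everything lives under one root, the whole tree can be zipped up and moved.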



