It's distasteful of Go to impose a filesystem layout.
Any other language that I work with understands that dependencies should be self-contained within the project folder. Each project has its own set of dependencies that have been tested to work together. The user can choose where to check out the project, or even keep multiple copies of the same project lying around.
To achieve the same thing with Go, one has to set a different GOPATH per project and then check out the project deep inside that root. This is not convenient for a developer or for a software packager. The alternative is to give in and resolve a single set of dependencies that works across all current projects. Given that Go doesn't really do package versioning, finding the right set of commits that all work together is an exponential nightmare.
Now that each project can have its own vendor/ folder since Go 1.6, GOPATH shouldn't be required. I should be able to check out the project wherever I want and then have the tools look in the vendor/ folder to resolve any dependencies. Please change it that way. The reason govendor and all these tools are complicated is this GOPATH madness. Syncing dependencies between GOPATH and vendor/ should only be a problem for Google, not the rest of us.
It's pretty easy to declare a GOPATH that's local to your project. I've been quietly doing it for years. The linked repo is just one short shell script that helps do it (on any project; you don't even have to commit the script, and it doesn't change how anyone else develops).
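I can't speak for the linked repo, but a minimal sketch of the idea looks something like this (the script name, the `.gopath` directory, and the layout are my assumptions, not the repo's):

```shell
# gopath-here.sh - source this from the project root to get a
# project-local GOPATH. The ".gopath" directory name is an assumption
# for illustration; any name works.
PROJECT_ROOT="$(pwd)"
export GOPATH="$PROJECT_ROOT/.gopath"
export GOBIN="$GOPATH/bin"
export PATH="$GOBIN:$PATH"
mkdir -p "$GOPATH/src" "$GOPATH/pkg" "$GOPATH/bin"
```

Since everything lives under the project root, you can keep several checkouts of the same project side by side, each with its own dependency tree, and nobody else on the team has to change how they work.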
Frankly, I think every highly-productive gopher outside the Google offices uses some form or another of this. It's unpopular to say it and goes against "the community" zeitgeist, but many people will admit it in private if you ask around.
My biggest complaint about the current state of Go dependency management is actually the vendor dir. We never needed it; a project-local GOPATH was already enough. Now we just get headaches when both exist and conflict, or when one provides something that should've been in the other (not a problem until you push, and someone else on the team has to tell you that you didn't vendor enough...)
I made[1] a zsh hook some time ago that allowed you to have per-project GOPATH.
At the root of the project I just have to do `echo 'github.com/username/package' > .gopkg` and after that, for everything I do under that directory tree, GOPATH will be automatically set to something like `export GOPATH="$(dirname .gopkg)/.gopath"`.
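A rough reconstruction of such a hook (the function name and the symlink step are my guesses, not necessarily what the linked hook does):

```shell
# Walk up from $PWD looking for a .gopkg file; if found, point GOPATH
# at a sibling .gopath dir and symlink the project into $GOPATH/src
# under the import path stored in .gopkg.
_gopkg_chpwd() {
  dir="$PWD"
  while [ "$dir" != "/" ]; do
    if [ -f "$dir/.gopkg" ]; then
      pkg="$(cat "$dir/.gopkg")"
      export GOPATH="$dir/.gopath"
      mkdir -p "$GOPATH/src/$(dirname "$pkg")"
      [ -e "$GOPATH/src/$pkg" ] || ln -s "$dir" "$GOPATH/src/$pkg"
      return
    fi
    dir="$(dirname "$dir")"
  done
}
```

In zsh you'd register it with `add-zsh-hook chpwd _gopkg_chpwd` so it fires on every directory change.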
It's a blessing when you want to start a new project while offline (e.g. on a plane or train). All your usual dependencies are already there, ready to use.
Not that it justifies any pain, just found it to be a pleasant side-effect.
Sometimes it's worthwhile to just give in and try something new, even if you don't like it. You might discover unexpected advantages. Then go back to what you prefer and apply those lessons there.
Everything would be much better if we all kept an open mind and learned from each other's experiences. It might not always be obvious, but things are usually a certain way for a good reason. It might not be the best or right reason in the long run, but there's always some lesson to be learned.
That’s, like, the opposite of npm though. It does dependencies per project, versioning, and does not impose any filesystem restrictions. Not that it doesn’t have issues.
> Go dependencies are a little odd the first time you run into them. I suspect this is because Google runs a mono-repo and as such the decisions around it were made with that in mind.
I think the oddness is around the directory structure. It may also be because Go was developed by UNIX old hands. When UNIX was designed, it was not just for ordinary users but also for developers. When developing on UNIX, you can use all of UNIX as an IDE (grep, find, etc.). Most Unices have a folder called /usr/src [1] where sources are stored. When you want to build a package, you cd into it (say 'cd /usr/src/make') and then say 'make && make install', and it builds and installs the package to your /usr/bin.
I am assuming Google's internal monorepo, which is derived from Perforce [2], encourages you to keep /usr/src under source control. Since Piper is not released to the public and most people now use git, GOPATH was probably conceived as a way to relocate '/usr/src/' to another location. This is probably why 'go install' (which probably mimics 'make install') installs everything to 'GOPATH/bin'. If this is true, I think we have a mental model of Go's directory structure.
To be fair, Go code at Google doesn't use the standard directory structure. I had all my Go code in one directory, and which files were part of which package was defined in a build file.
What exists in the public world of go was simply what the go team wanted for go and not a compromise to fit in with other systems at Google. I think the fact that you end up with what looks like a monolithic repository is just a coincidence.
In the end ~/go doesn't look a lot different than the @INC path that points to something in your home directory, or /usr/include, /usr/lib, and /usr/bin in a pure C system. It's just the fact that you also put your own code in the same directory as third-party packages that confuses people... but in the end, someone else will consider your package a third-party package someday... so I don't think there's much value in treating it any other way.
> In the end ~/go doesn't look a lot different than the @INC path that points to something in your home directory, or /usr/include, /usr/lib, and /usr/bin in a pure C system.
Well, yes and no, and that's the only thing I don't like about GOPATH: that your source code is not in GOPATH, but in `$GOPATH/src`.
I'm not against GOPATH per-se because I'm already used to other PATHs; I'm just against it not behaving like those other common PATH variables, like PATH, LD_LIBRARY_PATH, PYTHONPATH[1], etc. GOPATH introduces intermediary directories (src, pkg, bin), instead of going straight to the point and being only for source code.
I mean, there's `GOPATH` and also `GOBIN` which is just `$GOPATH/bin`. IMO it would make much more sense for GOPATH to be only source code, GOBIN to be only executables, and something like GOPKG for the current `$GOPATH/pkg`.
[1]: Yes I know venv is a thing, I'm just mentioning PYTHONPATH to illustrate my point.
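For what it's worth, GOBIN is already honored independently of GOPATH today; it's only pkg/ that has no equivalent knob. The paths below are examples:

```shell
# GOBIN can already be relocated on its own; sources still must live
# under $GOPATH/src, and there is no GOPKG equivalent for pkg/.
export GOPATH="$HOME/code/go"       # compiled packages land in $GOPATH/pkg
export GOBIN="$HOME/.local/bin"     # `go install` drops binaries here instead
mkdir -p "$GOPATH/src" "$GOBIN"
```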
Fully agree. Having the option to split these three domains would make it a lot easier to use basically any directory structure for your projects and for how they can be distributed.
> Most Unices have a folder called /usr/src [1] where sources are stored. When you want to build a package, you cd into it ( say 'cd /usr/src/make') and then say 'make && make install' and it builds and installs the packages to your /usr/bin.
That more or less remains how BSD ports systems work.
The default Go Docker images include everything needed to compile Go. But once you have a binary, you only need an image capable of running that binary. I've seen image sizes reduced by over 95%.
Revising the example in the article:
FROM golang:1.10
COPY . /go/src/bitbucket.code.company-name.com.au/scm/code
WORKDIR /go/src/bitbucket.code.company-name.com.au/scm/code/
RUN CGO_ENABLED=0 go build main.go
FROM alpine:3.7
RUN apk add --no-cache ca-certificates
COPY --from=0 /go/src/bitbucket.code.company-name.com.au/scm/code/main .
CMD ["./main"]
The `CGO_ENABLED=0` and `apk` are oddities of Alpine Linux specifically. The image you choose to run your binary may not need these.
In my use case: container scheduling / orchestration. Tools like mesos/k8s/docker swarm. I don't "copy binaries over to a host" - it's rare that I would even have ssh access to a host.
Those tools aren't optimized for running arbitrary binaries - hence containers.
There are definitely use cases for Docker with Go, mostly revolving around running untrusted code or wanting to set tight restrictions on CPU usage, memory, etc. Though you could also do the latter directly via cgroups. Another use case is when extra files need to be deployed alongside the binary.
I wrote up some documentation about this for my use case just in case I ever have to go back and change it.
You haven't refuted my point. People take the stance that Docker can run untrusted code without actually looking at what they have to do for that to be true.
I haven't taken the stance that all docker containers can run untrusted code, but I've certainly done my best to harden my docker containers as much as possible. If you took a look at that write up I cover it.
I run with the flags: --net=none --cap-drop=all --cpus=1 --read-only --tmpfs=/tmp:rw,size=1g,mode=1777,noexec
This gives it no way to communicate with the outside world and drops all capabilities, so it's not allowed to interact with the kernel at all. Setting the CPU limit to 1 also prevents internal DoS attacks.
I also run the process under the nobody user which entirely avoids the "container root is system root" issue.
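Putting the flags and the user together, the full invocation would look roughly like this (the image name is a placeholder, and the block only prints the command rather than running it):

```shell
# Sandbox flags from the comment above; "untrusted-image" is a
# placeholder image name, not a real image.
SANDBOX_FLAGS="--net=none --cap-drop=all --cpus=1 --read-only \
  --tmpfs /tmp:rw,size=1g,mode=1777,noexec"
# --user nobody avoids the "container root is system root" problem
CMD="docker run --rm $SANDBOX_FLAGS --user nobody untrusted-image"
echo "$CMD"
```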
I'm also only sort of running "untrusted" code. I'm running tensorflow models which can do arbitrary computation but are more secure than just running raw code.
Using the `golang:1.10-alpine` image to run your binary would defeat the purpose of a multi-stage build, since it includes everything needed to compile Go and weighs in around 376MB. The `alpine:3.7` image is about 4MB.
edit
I misunderstood the parent, they meant to use `golang:1.10-alpine` as the build image, not the run image, as in:
FROM golang:1.10-alpine
COPY . /go/src/bitbucket.code.company-name.com.au/scm/code
WORKDIR /go/src/bitbucket.code.company-name.com.au/scm/code/
RUN go build main.go
FROM alpine:3.7
RUN apk add --no-cache ca-certificates
COPY --from=0 /go/src/bitbucket.code.company-name.com.au/scm/code/main .
CMD ["./main"]
It feels like a slight failure that things like this are necessary, especially for a new language.
As a Python developer, I am no stranger to slightly ridiculous packaging / environment woes. However, Python has 19 years on Go. I understand that they are different languages and ostensibly serve different purposes, but it feels like a shortcoming to me, based on the little Go experience I've had.
> As a Python developer, I am no stranger to slightly ridiculous packaging / environment woes.
Speaking of which, what is the best way to start a python project in 2018, now? Python's version and dependency management is my least favorite aspect of the language.
> Speaking of which, what is the best way to start a python project in 2018
Pipenv[1] and flit[2]. Together, these do most of what npm/yarn provide for a Node.js project.
Pipenv for installing packages. It’s a virtual environment manager (like pyenv, virtualenv, venv) that’s aware of and stays in sync with the package requirements list, and supports repeatable builds. It claims to be “officially recommended”, and Travis and Heroku, for example, detect its configuration file.
Flit for publishing a package. It’s driven from a declarative project file.[3]
It's still a bit new, but I think pipenv [1] is the new standard now. No need to directly work with pip or virtualenv anymore, and the equivalent requirements file works more like npm/yarn.
Nearly all my work at this time is in C++, Rust, Python, and JavaScript. Thanks to pipenv, Python environments finally feel on par with Rust/cargo and NodeJS/Yarn.
C++, though, is still a mess when it comes to isolated build environments. Meson and Bazel are close, so close...
With all due respect to the Python community, and without intending this as an insult per se, if this really is the "current recommendation", the Python community is in no position to be criticizing the Go community on this point. This is about the fourth answer I've heard in the past 10 years. I've been programming off and on in Python for at least 15 years and I've never heard of this tool.
Out of morbid curiosity, what's the current answer for "how to pack up a Python program into a single executable/directory for Windows" now?
> Out of morbid curiosity, what's the current answer for "how to pack up a Python program into a single executable/directory for Windows" now?
The Hitchhiker's Guide to Python[1] (HHGTP) encapsulates a lot of standard tools and best practices. I've been recommending it to students for a few years now. Last year it listed virtualenv, venv, pyenv, etc.; now it recommends just pipenv.
HHGTP also has sections on "Packaging Your Code" and "Freezing Your Code". I think what you're asking about is referred to there as "freezing". The "Freezing" section of HHGTP lists some Windows tools, and contains a table comparing them. There doesn't look to be a single accepted one.
A G^nP post did criticize Go compared to Python, but I don't believe the author speaks for “the Python community”.
Am I meant to justify all Python-related decisions because I choose to work with it? Does my using Python (or even my membership in 'The Python Community') disqualify me from critiquing on other languages and development environments?
What does your comment add to the discussion other than snark and aggression?
A correction to people claiming that Python's obviously got a better solution to dependency management than Go, a solution so "obvious" that a Python programmer of 15+ years hasn't heard of it yet. (No, I'm not actively programming in Python right now, but if it's so obvious and concrete I should still have known about it. I was up-to-date on best practices as of a couple of years ago!)
You aren't obligated to do anything; the question is open to anybody.
I don’t like a lot of Go features, but the GOPATH system is really quite good (until you care about pinning versions, at which point it becomes no worse than Python).
GOPATH is going away with vgo, so it wasn't really that good after all: it forced the developer to adopt a specific folder organization. The fact that the Go maintainers acknowledged this issue is a sign they are ready to move the language forward.
> forced the developer to adopt a specific folder organization
Well, so does C (and basically everything else). Alternatively, you can pass tons of -I and -L flags to your compiler calls but you can do that with Go, too.
The point of GOPATH was to make this unnecessary because everything can be deduced from the source code and its location within the file system.
I mean, the C toolchain has plenty of baked-in paths where it expects to find something. If project foo depends on project bar, project foo is not automatically going to look at ../bar when building. You either set that up manually, or you put everything in /usr/include and /usr/lib.
"Set that up manually" usually consists of an m4 script that takes 15 minutes to run as it tries to look for all the dependencies ("./configure"). It's flexible, but it's not great.
Irrelevant, as I didn't say the Go team should build a package manager, just that Go lacks one.
And I mean a dominant one -- there are a few attempts for Golang. Default doesn't have to mean "core-team built". All the package managers I've mentioned are de facto standards for their respective languages.
Couple of quick notes from my phone, as a googler gopher:
- IIRC package management was historically left to the community to solve, rather than the Go maintainers mandating how package mgmt should be done. Many languages have followed this model. I've never heard of it having anything to do with our monorepo model. The community never ended up standardizing on one approach; since then, the Go team has endorsed dep as the official experimental package manager, whose learnings were used to design vgo. vgo is very new and still being iterated on; check it out though!
- vgo removes the need for GOPATH
- There is a one-line installer to set up your env. It's very useful and simplifies a lot of what this blog talks about.
Sorry I don't have links - on phone at airport! All the above should be easily googleable though! :)
If I may ask, how do you handle the use of other languages within a project that uses Go? I am new to Go, so it might be a silly question.
I currently have a project that uses Node and React for the front-end and Go for streaming and scraping large .xml + .json data due to its great standard library.
With Node I can start a project wherever I would like — e.g. D:/work/node_project. Go seems to be much more opinionated on that? I opted to follow Go’s preferred directory structure and include npm related packages there, but feel I am not following best practices. Ideally, I would like to have one directory for my projects.
I will take a good look at vgo this afternoon. It removing the need for GOPATH sounds interesting. Thank you.
> For Windows it allows you to share the directory between the WSL and Windows
I've avoided changing the GOPATH by using the following set up:
C:/Users/<user>/go
(Standard windows GOPATH)
Then symlink in WSL:
ln -s /mnt/c/Users/<user>/go ~/go
Works pretty well. Then you can install Go on Windows if you need it, otherwise your dev environment is isolated and your Go code remains on Windows if you remove/refresh WSL. Also if you use file history, the code will be backed up this way in your Windows home as well.
The biggest blocker in getting me to give Go a shot is it forcing a certain directory structure on me, and the lack of Gemfile or package.json equivalent.
Has this been solved? I really want to use Go because of everything I've heard, but I can't stand how $GOPATH forces a directory structure on me.
It bugs you for about 20 minutes as you question why this isn't like other languages you've used before, and then you get over it. If that is your only stumbling block to learning a new language, I think you can overcome it pretty easily.
>The biggest blocker in getting me to give Go a shot is it forcing a certain directory structure on me, and the lack of Gemfile or package.json equivalent.
The second is a problem. The first is bike-shedding (and Go doesn't really force anything -- you could just use e.g. a Makefile, and have your Go files wherever you want. You only need to follow the directory structure convention if you want go build, go install etc to work out of the box).
The biggest pain with Go is the dependency management. I initially started to commit the whole vendor folder for each project, but lately I only commit the Gopkg.lock and Gopkg.toml
Both of those approaches have bothered me over time, and I'm still undecided about the best way to do this. I know some projects commit only their Gopkg.toml (not the lock) with some explicit dependencies.
> I initially started to commit the whole vendor folder for each project, but lately I only commit the Gopkg.lock and Gopkg.toml
This really bothers me too; I appreciate Go's liberal application of convention over configuration, but I feel like this part of dep is a break from the simplicity afforded by the aforementioned principle. I often appreciate this in other Go tooling (e.g. gofmt), and I would love if they just made everyone do dep one way (I don't care which.)
I'm a big fan of committing vendored dependencies. I sleep better knowing that I can checkout any commit and expect it to build. But that only works for main packages, library packages are still a sore point.
This is not a "Go thing" though. The only difference is that in most go projects vendoring the whole folder is actually quite easy.
I've just started to check in vendor/ for my current project. It's nice to know that all the code is "locked" in git and that a git clone is all you need to get all the dependencies. OTOH, it creates more work and forces you to take a more active role in dealing with dependencies. But it gives you more control.
Once you set it up, document exactly that setup in the readme. If you use something specific for dependency management, document it. If not, document it. Document how to run tests. Document everything...
At work I often run into issues with "random project X on GitHub" and do a bit of drive-by fixing. But many times when I find a Go project, I have no idea how to get from a fresh clone to running tests. And if that takes a lot of time to figure out, you're getting a vague issue that I may not want to spend time on, rather than a ready PR. Unfortunately, this happens with Go projects much more than with other languages in my experience.
I'd say this is true for all projects tbf; set up a readme, type what you need to know and how to run the project in dev mode. Shouldn't take more than five minutes to type out, or copy / pasted from another project.
"It’s a good thing that Go is such a good language, because getting started with it is downright PAINFUL compared to Ruby or Python."
I'm not sure that's particularly true. Getting started is easy: 1. Install go. 2. mkdir -p ~/go/src/yourproject 3. Start writing Go in yourproject. go build will build your code. go install will put your executable in ~/go/bin. When you need some code from github, run "go get" and don't worry about where it goes. You're just starting out.
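Those steps spelled out as a quick terminal session (the go commands are left as comments since they assume the toolchain is on PATH; the project name is a placeholder):

```shell
# Step 2: create the project under the default GOPATH (~/go).
mkdir -p ~/go/src/yourproject
cd ~/go/src/yourproject

# Step 3: start writing Go.
cat > main.go <<'EOF'
package main

import "fmt"

func main() { fmt.Println("hello") }
EOF

# go build                       # produces ./yourproject
# go install                     # puts the binary in ~/go/bin
# go get github.com/pkg/errors   # fetches a dependency into ~/go/src
```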
Compare getting started with Python: 1. Install python. 2. Create your project directory anywhere you like. 3. Start writing python. Python will run your code. pip will install libraries if you ask for them. Don't worry about where they are going, you're just starting out.
Other than "choice of directory" that's not very different.
Industrial-strength dependency management is harder than that, but you can retrofit it on later quite easily in Go. Both dep and vgo (from personal experience, I assume others too) can examine a project that is a mess of hand-vendored directories and references to the general package space and extract out a manifest file for you, which is probably exactly what you want, since it will reflect the current project.
It may actually even be easier than Python here, since Go source can be examined and the exact packages you are using confidently and accurately plucked out; I don't recall if pip has a "just examine this directory and freeze its exact dependencies" command and blew out my "HN comment time budget" trying to google the answer. In Go, you can just cruise along for a long time, using dozens of packages, and drop dep or vgo in at the last minute and get functional dependency management on your existing project in one command. I've converted two projects that ended up using over 30 packages (at least a middlin' size in the Go world) each to dep in the ~5 minutes it took "dep init". (It checks things out fresh, so that's mostly source code retrieval.)
>It’s a good thing that Go is such a good language
Well, not that good. Decent would be the best one could say. It has many special cases and warts, and lacks a few very standard features. But let's not get into that here.
>because getting started with it is downright PAINFUL compared to Ruby or Python
If you mean regarding package management, maybe. In other terms, not really painful at all. It's one of the easiest languages to just download, code, and build your code -- you can even cross-compile with like 2 extra ENV settings.
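For the record, the "2 extra ENV settings" amount to this (the target and output name are examples; the block just prints the command rather than running the toolchain):

```shell
# Cross-compiling only needs GOOS and GOARCH set on the normal build
# command; `go tool dist list` prints every supported pair.
CROSS="GOOS=linux GOARCH=arm64 go build -o app-linux-arm64 ."
echo "$CROSS"
```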
It's so painful to get started that I'm not at all convinced the okay-ish language experience is worth it. I feel like Go was on the verge of something brilliant but managed to miss it by a couple of significant areas.
`dep` is actually a good tool and was supposed to be the official one. But seemingly out of nowhere, `vgo` came out and is said to be deprecating `dep`. I don't really understand how the Go team makes decisions.
Go seems to be powered by a general principle of "Was it invented here?", which undoubtedly stems from its plan9 roots as an upstart reinvention of Unix. The software community tends to treat the "not invented here" phenomenon as detrimental, but I think in the case of Go, the willingness of the team to reject the status quo has been a significant contributor to its success and utility.
But for all the good this attitude brings (and I would definitely regard it as a net benefit) there have been some pretty huge missteps too, and the Go team's prioritisation of things like type aliases or the half-finished plugin architecture over those missteps leads many to ask reasonable questions about how those priorities are decided.
GOPATH is a long-acknowledged mistake and a lot of community effort and consensus-building work has gone into finding a solution. vgo was a bit of an ambush and has left a lot of people confused and bewildered, especially after a huge amount of community goodwill and momentum was built up around dep. That was a price Russ Cox decided to pay with vgo, which was his prerogative. Now we just need to wait and see whether it was worth it, which will take some time.
Can't `dep` be easily redesigned so it sits atop the `vgo` architecture? I don't know enough about `dep` to say for certain, but it seems to add extra functionality to what `vgo` will provide, and could pivot into a still-relevant extension to `go`.
I think this article is a little bit shortsighted. You should probably be studying vgo in 2018 instead of using the old and soon-to-be-deprecated way. Of course, this being golang, I'm about to get a bunch of replies from people who think they know better, suggesting that vgo is terrible and describing their Stockholm-syndrome-like relationship with GOPATH.
vgo is the only reason I am willing to give Go the time of day in 2018.
> To search for anything about Go in your search engine of choice use the word golang rather than go when searching. For example to search for how to open a file I would search for golang open file.
This seems to be a recurring chestnut, but I would have expected the author to actually test it. If I search for [go open file], 8 of the first 10 results on Google show useful information.
The overall approach to folder structure can be a personal thing, and doesn't matter too much what you settle on. My own preference is for something like:
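(A hypothetical sketch of such a layout follows; the directory names are reconstructed from the packages described in this comment, not the original listing.)

```text
myproject/
  main.go        # wiring and initialization only
  server/        # Start/Run funcs that accept the router's http.ServeMux
  router/        # builds the mux from the handler sub-packages
  handler/
    get/
    put/
```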
In this setup, the router sub-package might import handler/get, handler/put, and so on, returning a router, which can typically be passed as an http.ServeMux to a Start or Run func in the server sub-package. main() winds up doing the bulk of any initialization sub-packages need, but not much else.
At various points you'll probably find that sub-packages become general enough that they can be easily broken out into their own repos and maintained separately. This can wind up being a good goal to shoot for when designing the APIs for each sub-package.
But wouldn't it be better to organize it by domain, together with clean architecture? Because if the application has too many domains, it can get a bit messy, can't it? (Although if a micro-service strategy is used, this shouldn't be a problem.)
I am always interested in how to organize the folder architecture in a way that won't get in the way of the development team in the future.
Example: MVC was one of the go-to choices some years ago, with folders /models, /views and /controllers where everything was put. But nowadays we know that it can get messy, especially when there are too many domains. Besides that, MVC isn't appropriate for the REST world (the V is not suited for it).
I tried to get into rustlang and golang, but both languages are so opinionated about how I should structure my workspace and how I should manage dependencies that I gave up. I'm used to cmake and C/C++, where I can do whatever I like, however I like. I'll have to give Go another try after reading TFA.