Another reason Go has been helpful is that it generates a single executable that can be distributed to our clients. There's no complex dependency chain or layout of shared libraries to worry about.
This is such a subtle yet profound feature. Think about it: no more dependency nightmare during deployment!
Static linking is easily available with C/C++ as well. The main trade-off is how critical bugs in a library your program uses get fixed.
With dynamic linking it's possible for the library author to publish an API-compatible update which, when deployed, fixes the issue in all programs that use the library. With static linking you need to recompile the whole program and publish the update yourself.
For a real-world example, look at the Microsoft C runtime. When a security vulnerability is found and fixed, Microsoft pushes the dynamically linked version of the update via Microsoft Update, and millions of programs are no longer affected by the issue. If you linked statically, however, you need to update your program as well. Many programs do not have good automatic update systems, and many programs aren't even under continued maintenance and remain forever vulnerable.
I wonder if there's a way to have your cake and eat it too.
Publish your app as a dynamically linked binary to an app publisher; the publisher takes your DLL and mashes it together with its dependencies into a single DLL that they deliver to the customer.
When there is a security update in one of the dependency DLLs, the publisher can deliver a new binary to the customer with no need for the original author to push a button.
The only reason this is not an issue with Go is that the language is young and the reference compiler only supports static linking for the time being.
The moment the compiler starts supporting dynamic linking the same will happen to Go.
Only executables for the time being. The Go runtime is the main loop and needs to have control of the execution flow for concurrency.
If you think about it, DLLs are a bit like separate programs that you communicate with through the C call stack instead of another structured protocol. In Go you would spawn different processes and use something like protobuf to exchange messages.
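A toy sketch of that model, using JSON over stdin/stdout instead of protobuf to keep it self-contained ("cat" just echoes the message back, standing in for a real worker process):

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type msg struct {
        Op string
        N  int
    }

    func main() {
        cmd := exec.Command("cat") // stand-in for a separate worker program
        stdin, _ := cmd.StdinPipe()
        stdout, _ := cmd.StdoutPipe()
        if err := cmd.Start(); err != nil {
            panic(err)
        }

        json.NewEncoder(stdin).Encode(msg{Op: "incr", N: 1})
        stdin.Close() // no more input

        var reply msg
        json.NewDecoder(stdout).Decode(&reply)
        fmt.Printf("reply: %+v\n", reply)
        cmd.Wait()
    }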
I largely agree with you, but for what it's worth, that doesn't "just work": there are places in glibc that use dlopen to pull in modular code; one example is the DNS resolver.
As mentioned, if you try that it works less than 100%. Trying to run a 32-bit program on a 64-bit host will fail in weird ways unless some of the expected bonus libraries are available.
This also makes cross-compiling a breeze. I write software in Go for armv5 chumby devices, but I edit and compile on Windows (the Go compiler itself can be compiled and run on the chumby devices, but they are too slow and memory-constrained for active development).
Getting a build for the chumby from my Windows/x64 box is as easy as setting GOARCH=arm, GOARM=5 and GOOS=linux and then re-running go build. Ridiculously simpler than setting up a full gcc toolchain that targets the device, taking into account what libc is on each device, etc. Build on the Windows box, scp the output executable over, and It Just Works.
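For example, from a Windows command prompt it's just (the binary name and device path here are made up):

    set GOOS=linux
    set GOARCH=arm
    set GOARM=5
    go build -o myprog
    scp myprog user@chumby:/tmp/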
Ha, this is very cool. I didn't know you could cross-compile Go before. I just tried it, and it's really easy to set up builds for multiple architectures. Thanks for the tip!
This is one of my major annoyances with python and ruby. I want to take all the package dependencies and toss them into a single executable that at most relies upon the interpreter to be there.
With a bit of hacking you can do it, but it isn't the most straightforward procedure, can be messy, and isn't really supported by the community.
The Python side's actually really easy and well-supported: Python can run code from a zipfile, so tools like `py2exe` or `pyinstaller` get you down to Python interpreter + zipfile. That's two files, but only two files. I don't know if there's similar stuff on the Ruby side.
With other languages such as C/C++ if you link a library into your program statically, that library may dynamically load other dependencies. In Go, all dependencies have to be statically compiled into that package.
Rob Pike made a good point about this in one of the Go talks [1] (I think it was the "Meet the Go team" talk) when they were talking about package management and static linking, and how in C++ you can get massive chains of useless includes (I think it came out of a question on the speed of the Go compiler). His reasoning was that all the code you need to compile against a package should be in that one package binary. You shouldn't need to worry about having other 3rd-party packages that your dependency depends on installed on the target system.
Using cgo is the exception rather than the rule, and when Rob talks about package dependencies he means any file required for building against that package, not only the shared objects. This includes C/C++ header files. Even with cgo, to use a package that uses cgo you only need that package archive; you need not care about what files went into creating it.
You can do the same thing with any compiled language.
This is not necessarily true. Remember the DLL hell? Go does not support dynamic linking at all (some say this is a disadvantage), so you are forced to produce a single self-contained binary.
DLL hell only happens if you use dynamic linking. Static linking is always an option. (Actually, is it? I don't tend to use a lot of libraries; how much stuff that isn't part of the OS can you not statically link?)
DLL hell happened (in Windows) because you had to dynamically link against system libraries which could change in incompatible ways making your code fragile.
Depending on the nature of the development toolset, you may not be able to avoid DLL hell. E.g. you may not have the right to build the library into your code and/or the toolset may force you to link against external code libraries.
The original goal of Microsoft's DLL strategy was to save disk space (and presumably memory), but the end result was a compatibility nightmare. In retrospect it seems to have been both a bad technical decision (saving disk space turned out to be a poor trade against other considerations) and badly implemented (too many DLLs went live and then broke backwards compatibility in updates). It seems like other companies managed to allow dynamic linking without creating so much fragility.
It was a great idea and still is. Linux does pretty much the same thing.
It actually broke because they didn't allow multiple versions and Windows linked to the latest version at runtime.
They fixed all this with WinSxS, which allows a PE file to specify a manifest, i.e. a list of required versions of DLLs to load. This is all neatly explained here:
It would be more accurate to say DLL hell happened because vendors decided to install their 3rd-party dependencies in the Windows directory. There are not as many problems with actual Microsoft DLLs.
Not if you use C and glibc (the most common combination of them all; most software requires glibc). Even if you compile glibc statically into your program (not an option by default, at least on Ubuntu), glibc itself will load more shared objects at runtime.
I've had very positive experiences with statically linking against musl [1]. As long as your code and the required libraries don't use dlopen() tricks (e.g. as glib's GIO does), you can quite reliably create a single statically linked binary for your program. Admittedly, this isn't as easy to set up as Go is.
The other nicety that goes hand in hand with this is crazy-simple cross-compiling. My current project uses cgo and pcap, so I don't get to use it now [1], but my last and next projects allow me to hit a single command and have nice statically compiled binaries for FreeBSD, OS X, Linux and Windows in a very small amount of time with almost no setup effort.
I still need to get it wired up to the GitHub API for downloads and life will be perfect. I can push to master and upload a build for download all in one fell swoop.
([1] Cross-compiling is not [currently?] supported with cgo.)
The code presented is not thread safe. It reads and writes to Counter.c from two different goroutines.
It's easy to make this mistake in Go, as it's trivial to make your program not only concurrent but also parallel once it utilizes more than one thread (today that requires calling runtime.GOMAXPROCS(); that requirement will go away in the future, and parallel execution will be the default).
If you use goroutines, always make sure that you are not accessing data from more than one goroutine at a time unless you are using locks or atomic instructions (which would be easy in the case of a counter).
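For a counter specifically, here's a minimal sketch using sync/atomic. The Counter type mirrors the post's type with its c field; the rest of the names are made up:

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    type Counter struct {
        c int64
    }

    func (ct *Counter) Add(n int64)  { atomic.AddInt64(&ct.c, n) }
    func (ct *Counter) Count() int64 { return atomic.LoadInt64(&ct.c) }

    func main() {
        var ct Counter
        var wg sync.WaitGroup
        for i := 0; i < 1000; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                ct.Add(1) // safe from any number of goroutines
            }()
        }
        wg.Wait()
        fmt.Println(ct.Count()) // always 1000
    }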
The alternative is to think of passing ownership of data when transferring it over a channel. The receiver is now in charge of it and should be the only one accessing it.
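A toy example of that hand-off (everything here is made up for illustration):

    package main

    import "fmt"

    func main() {
        ch := make(chan []byte)

        go func() {
            buf := []byte("hello")
            ch <- buf // ownership of buf transfers with the send...
            // ...so this goroutine must not touch buf afterwards
        }()

        buf := <-ch  // the receiver is now the sole owner
        buf[0] = 'H' // safe: nobody else is looking at it
        fmt.Println(string(buf))
    }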
Darn. I just knew that when I was extracting some showable bits of code from Railgun and making simple examples that I'd mess something up. The third Gist in there isn't anything like the Railgun code (it was just showing the use of the Counter object) and, as you point out, it is not thread safe.
I will update the blog post with a version that uses a channel and select to send the count.
Hi John, https://gist.github.com/3039932 has problems; w.Count() is evaluated every time around the for-loop even if the result isn't sent down the count channel. See http://play.golang.org/p/pbCX13my0q where I've altered lines 60-62 yet there's still only "29 bytes written".
Yes, that's correct. It does look a bit odd, doesn't it? As you say, it's because w.Count() gets evaluated by the select each time around the loop, and so the count gets reset to 0.
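Here's a self-contained demonstration of that evaluation behaviour; count() just stands in for w.Count():

    package main

    import "fmt"

    func main() {
        evals := 0
        count := func() int { evals++; return evals }

        counts := make(chan int)        // never ready: nobody receives from it
        other := make(chan struct{}, 5) // always ready
        for i := 0; i < 5; i++ {
            other <- struct{}{}
        }

        for i := 0; i < 5; i++ {
            select {
            case counts <- count(): // count() is evaluated on every pass...
            case <-other: // ...even though this case is always the one chosen
            }
        }
        fmt.Println("count() was evaluated", evals, "times") // 5, not 0
    }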
Probably the best way to fix that is to make Count() non-destructive and have an explicit Clear function that gets called when a count has been delivered.
It's almost like sometimes it would have been handy to do
case ch <- defer foo():
and only have foo() be evaluated if ch is ready for writing and that case is chosen. As the (fixed) code stands, either the Sprintf() and Sum() or the Count() is wasted on each iteration. I suppose in some cases you can calculate the initial values before the for-loop and then replenish them in the case branch where they're used.
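That last pattern would look something like this, with count() standing in for the destructive w.Count() (all names invented):

    package main

    import "fmt"

    func main() {
        next := 0
        count := func() int { next++; return next }

        out := make(chan int)
        done := make(chan struct{})

        go func() {
            for i := 0; i < 3; i++ {
                fmt.Println("received:", <-out)
            }
            close(done)
        }()

        v := count() // compute the initial value before the loop
        for {
            select {
            case out <- v:
                v = count() // replenish only after the old value was consumed
            case <-done:
                return
            }
        }
    }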
I'm having a hard time understanding the benefits of goroutines compared to Actors on top of the JVM.
I never played with Go, but I was somehow under the impression that goroutines behave like processes, with the only way to share data being through messages.
They might be theoretically equivalent to some degree, but in practice they are quite different, and I find reasoning about both concurrency and parallelism using goroutines and channels much more natural.
Have you used coroutines and threads before? Goroutines are basically coroutines multiplexed on threads.
If you want to, you can even just think of them as super lightweight threads.
They are not like processes, they share the same address space.
> They are not like processes, they share the same address space.
Yes, but it is worth remembering Go's (concurrency) motto:
Do not communicate by sharing memory; instead, share memory by communicating.
With channels you can send values, pointers into the shared address space (which is much more efficient), or even other channels (channels of channels are a very powerful and useful concept).
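Here's a toy illustration of both points: a single goroutine owns a map outright, and each request carries a reply channel over the request channel (names are made up):

    package main

    import "fmt"

    type request struct {
        key   string
        reply chan int // a channel sent over a channel
    }

    func main() {
        reqs := make(chan request)

        // This goroutine is the sole owner of the map; no locks needed.
        go func() {
            counts := make(map[string]int)
            for r := range reqs {
                counts[r.key]++
                r.reply <- counts[r.key]
            }
        }()

        for i := 0; i < 3; i++ {
            r := request{key: "hits", reply: make(chan int)}
            reqs <- r
            fmt.Println("hits =", <-r.reply)
        }
    }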
Most of the time, following this principle will make sense. It should also be said, though, that it's not always the right choice.
There are situations where using locks or atomic instructions is the better option, but those come up almost exclusively in performance-critical contexts.
It would be very nice to have some kind of compile-time-enforceable ownership of data with regard to channels and goroutines, because debugging thread-safety issues is a pain.
I don't know how feasible it is to implement this, though.
It is feasible if the language is carefully designed around it. Rust implements this through its uniqueness typing system. It is impossible to have two threads share mutable data.
Now if only Google supported Golang as a first-class development language on the Android platform... I would divorce and drop Java in a heartbeat. Sigh.
John, I have a question for you. I didn't know you worked at Cloudflare, so I looked up the "people" page last night, and I'm intrigued that while there are many "engineers", you are the only "programmer".
I assume you got to choose your title, and I'm interested to know why you chose that one. (For what it's worth, it's a title I personally wish more people would adopt, especially those who don't do formal engineering.) Is it a US vs UK thing?
Although I like Go as a possible replacement for C, it always amazes me when people describe Go features as if they were new and not already available in several languages that, for whatever reason, are not currently mainstream.
It makes me think that the development community has a serious problem knowing the history of computer science.
If you watch the "Meet the Go team" talk from Google IO, they explicitly mention that almost nothing in Go is particularly new, and most of its ideas have been around for decades: http://www.youtube.com/watch?v=sln-gJaURzk
What makes Go great is not the individual features, but the selection of the features and how well they work together, and also the "features" that were excluded from the language.
When I run the second example on play.golang.org, the output I get is:
This program will get 10 IDs from the id channel, print them and terminate
0a90333b6c0f5519b6e5f7ea2d541fb1793b957d
58 bytes written
af2f87580bb3562b58f2ad076e25f76ebf8fec75
0 bytes written
4491029b704d192be5a31eb0c35dfe516274d173
58 bytes written
d97a9080d01e9a8a11d0152d3df34f1c90f5cf16
0 bytes written
130c9fda35c98c7e8922d6ea24bb873838cfdcbd
58 bytes written
0e385bcc65efb6bac8ced67639c32b374d31ae57
0 bytes written
550413e7244dd40cf3df52634bb341dc166708b2
58 bytes written
49cfddab6ff9123ad237f03176927329349e186b
0 bytes written
12abf2ab19e52065b3f7367a57f6f8ecd4968b44
58 bytes written
0755254307f5f9b90829381fa797b5f26081cd5f
0 bytes written
That's left as an exercise for the reader. Look at the channel synchronization and follow the code for both routines (the main and the goroutine). I'll admit in this case that it looks pretty deterministic, but in other cases it could be a different pattern.
There was a question specifically about the GC during the "Go In Production" panel at Google IO, and everyone found that the GC was up to snuff, especially on 64-bit systems: