"Because concurrent Go programs use channels to pass not just values but also control between different goroutines, it is natural when reading Go code to want to navigate from a channel send to the corresponding receive so as to understand the sequence of events.
Godoc annotates every channel operation—make, send, range, receive, close—with a link to a panel displaying information about other operations that might alias the same channel."
Understanding large concurrent programs is significantly easier with a precise pointer analysis to help you track objects.
The GC changes are probably the most exciting (with due deference to the Plan 9 support ;) )
I still can't tell if it's evolved beyond mark-and-sweep; I have to assume it hasn't. We've heard the GC would be seeing improvements and that the current (now older) method was a stop-gap.
>> The garbage collector has been sped up, using a concurrent sweep algorithm, better parallelization, and larger pages. The cumulative effect can be a 50-70% reduction in collector pause time.
The release notes mention the graceful shutdown of a http.Server:
"The net/http package now provides an optional Server.ConnState callback to hook various phases of a server connection's lifecycle (see ConnState). This can be used to implement rate limiting or graceful shutdown."
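For example, here's a minimal sketch of a graceful-shutdown hook built on ConnState; the WaitGroup bookkeeping and the listener handling are my own illustration, not something from the release notes:

    package main

    import (
        "fmt"
        "net"
        "net/http"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        srv := &http.Server{
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                fmt.Fprintln(w, "hello")
            }),
            // Called on every state transition of every connection.
            ConnState: func(c net.Conn, state http.ConnState) {
                switch state {
                case http.StateNew:
                    wg.Add(1) // a connection was accepted
                case http.StateHijacked, http.StateClosed:
                    wg.Done() // the server is done with this connection
                }
            },
        }
        ln, err := net.Listen("tcp", ":8080")
        if err != nil {
            panic(err)
        }
        // To shut down gracefully from elsewhere:
        //   ln.Close() // stop accepting new connections
        //   wg.Wait()  // wait for in-flight connections to drain
        srv.Serve(ln)
    }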
We use Godep; it is very good. As per another answer to your question: with '-copy=false' it behaves a lot like bundler.lock. Having spent a lot of time working with it, we've found a few areas where you can get burned a little, particularly if you've structured your repos as a set of libraries, as seems to be the encouraged golang pattern.
When you have multiple libraries you have to be very specific about when you run godep, lest you find yourself with two libraries needing different versions of a common library; for example, Main imports Foo and Bar, which both import Baz. Godep provides a mechanism for handling this: each dependency is explicitly locked to a fixed revision (e.g. a commit SHA, in the case of git). The pain comes during debugging, as it can be very hard to reason about which version of a library you're actually using.
Additionally, the revision pinning is a bit of a PITA. We use a development flow which rebases our small commits into a big commit and then merges that into our master branch; if you ran godep prior to the rebase, you're now referencing a commit that no longer exists. Given the chain of references that can exist, this can go a very long way down. This same pattern also forces you to push your dev branches to an origin server, since godep checks out the repos during the build; a pretty benign concern, but a PITA if you forget and your build breaks because of it.
We're strongly considering moving to "one big repo" to help combat this issue (as well as a few others) for our internal golang repositories. Referencing "published commits" in 3rd party libraries is an acceptable level of pain. We're not entirely sold on this yet... just considering it.
There seem to be a lot of comments here recommending godep, but just to throw my experience in: none of the projects I've interacted with use godep (other than the Heroku buildpack, which was written by the author of Godep).
It seems to be a solution for some (not all) projects that are released in binary form, but that isn't relevant to most projects out there[0]. I have never felt the need for what godep provides; vendoring myself has been sufficient for the (very rare) case in which I need specific versions of dependencies other than tip/trunk.
I asked around on #go-nuts, and (though the sample size was small), the other regular contributors who idle in the channel seemed to have the same experience.
"Cross compiling with cgo enabled is now supported. ...
Finally, the go command now supports packages that import Objective-C files (suffixed .m) through cgo."
I'm a little confused at what it takes to get this going. I want to use cgo for a linux/arm target, built on darwin/amd64. Do I need to first build a gcc toolchain for linux/arm on my Mac?
> Previously, all pointers to incomplete struct types translated to the Go type *[0]byte
This makes me really happy. I end up writing a lot of bindings to C libraries (libjpeg, libpng, etc.), some of which use incomplete types to hide the contents of their internal structs. With this fix I'll finally be able to work within the constraints of the type system and stop hacking around it with casts to/from unsafe.Pointer.
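For illustration, the kind of binding this affects; the opaque type and its helpers below are hypothetical stand-ins for a real C library's hidden structs:

    package main

    /*
    #include <stdlib.h>

    typedef struct opaque opaque; // incomplete type: layout hidden from Go

    static opaque *opaque_new(void)     { return (opaque *)malloc(16); }
    static void opaque_free(opaque *p)  { free(p); }
    */
    import "C"

    func main() {
        p := C.opaque_new() // p is a distinct *C.opaque, not *[0]byte
        defer C.opaque_free(p)
        // Pointers to different incomplete types no longer collapse to
        // the shared type *[0]byte, so mixing them up is a compile-time
        // error instead of something hidden behind unsafe.Pointer casts.
    }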
    warning: GOPATH set to GOROOT (T:\Tools\Go) has no effect
    go build runtime: windows/386 must be bootstrapped using make.bat
So, if you installed Go in a different directory, ignore the "PATH" entry in the Windows environment variables and just execute "make.bat" in the "Go\src\" directory the first time.
As a scientific programmer I find it a shame that they have decided not to go with operator overloading. The rationale from the FAQ is that it "seems more a convenience than an absolute requirement". In the case of scientific software though it usually is a requirement.
I'm honestly glad that Go, unlike C++, does not have features that are useful only for a small subset of programmers. The only 2 cases I've seen where operator overloading is useful are:
1. String concatenation (Go has it)
2. Matrix/vector operations
That's it. Any other use case for operator overloading is dubious at best. It sucks for scientific programmers, but Go is a general purpose language, not a Scientific Programming language.
Complicating parsers, the grammar, readability, all those things, just to please your sub-group -> no thanks.
Operator overloading has absolutely no effect on the parser or grammar.
It may harm readability but that's more a question of naming. An operator is just a name. If you use that name to refer to something unexpected, you'll harm readability. If you use it for something intuitive, you'll improve it.
I don't disagree with you. I'm not suggesting that Go designers made the wrong decision with the goals they had. I am jealous that it's less useful to me though.
Because it lets your math code look more like math and less like code.
    v := Vector{}
    v2 := Vector{}

to add the two vectors, currently you have to do something like

    result := v.Add(v2)

rather than the nicer

    result := v + v2
In the small scale, it doesn't seem like a big deal. In a large and complicated scientific program, it can make the code a lot harder to read.
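For completeness, a minimal sketch of what the method-based version requires (the Vector type here is my own illustration):

    type Vector struct{ X, Y float64 }

    func (a Vector) Add(b Vector) Vector {
        return Vector{a.X + b.X, a.Y + b.Y}
    }

    // Longer expressions compound the noise:
    //   result := a.Add(b).Add(c) versus result := a + b + c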
Note: I think it is good that Go does not have operator overloading, though I think it's a shame that means it's not as good for scientific & math programming.
Is it really a "requirement," though? It sounds like it makes it easier to look at formulas as you would, say, on paper, but you can still build the same things as methods & functions.
It's not a "requirement" in the strict sense of the word, but I think it's useful enough to prevent the use of Go in science. Java has essentially suffered the same fate in science.
Say I had an array of masses and their velocities and I wanted to calculate the kinetic energy of each mass using the equation:

    E = 1/2 mv^2

With operator overloading:

    E = 0.5 * m * v**2

Without:

    E = m.mult(v.pow(2)).times(0.5)
The operator overloaded example is Python (numpy) - the non-overloaded one is something I made up, but it's basically what it would need to look like.
I think the first example is much closer to the maths.
This is not some contrived example; if you have raw data and you're using mathematical equations to work out relationships, you do this kind of thing all the time.
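For comparison, here's roughly what the method-call version could look like in Go. Vec and its Mul/Pow/Scale methods are made up for illustration, not a real package:

    package vec

    import "math"

    type Vec []float64

    // Mul multiplies element-wise.
    func (a Vec) Mul(b Vec) Vec {
        out := make(Vec, len(a))
        for i := range a {
            out[i] = a[i] * b[i]
        }
        return out
    }

    // Pow raises each element to the power p.
    func (a Vec) Pow(p float64) Vec {
        out := make(Vec, len(a))
        for i := range a {
            out[i] = math.Pow(a[i], p)
        }
        return out
    }

    // Scale multiplies each element by s.
    func (a Vec) Scale(s float64) Vec {
        out := make(Vec, len(a))
        for i := range a {
            out[i] = a[i] * s
        }
        return out
    }

    // E = 1/2 mv^2, element-wise:
    //   E := m.Mul(v.Pow(2)).Scale(0.5)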
Look at this http://stackoverflow.com/questions/11270547/go-big-int-facto... and compare how the function looks with int and with big.Int. Interestingly, big.Int has a method MulRange, which does exactly what the function would otherwise do, but this won't be the case the majority of the time. Given the extra tedium involved, someone working heavily with vectors, matrices, big numbers, etc., would certainly care about operator overloading.
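The gist of that comparison, for anyone not following the link (my own paraphrase of the two styles, not the linked code):

    package fact

    import "math/big"

    // With built-in integers, the formula reads like the formula:
    func factorial(n int) int {
        f := 1
        for i := 2; i <= n; i++ {
            f *= i
        }
        return f
    }

    // With math/big, every operation becomes a method call on *big.Int:
    func factorialBig(n int64) *big.Int {
        f := big.NewInt(1)
        for i := int64(2); i <= n; i++ {
            f.Mul(f, big.NewInt(i))
        }
        return f
    }

    // Or, using the special-cased helper the parent mentions:
    //   new(big.Int).MulRange(1, n)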
I am not a scientific programmer, but I'm going to attempt to provide an example anyway until someone else does one better.
I think there are certain mathematical operations that operate on what would be implemented as complex objects, so it is convenient to keep using the agreed-upon symbols for them when implementing your work.
    // addition and multiplication of native integers
    1 + 3 * 4

    // add and mult of math objects
    matrixA + matrixB * matrixC

    // here the math is slightly occluded
    add(matrixA, multiply(matrixB, matrixC))
There are proposals out there for adding multi-dimensional matrices to the language. I do like the recent proposal for 2-D matrices I read about. I think that would be better for the language overall than adding operator overloading.
Just as a clarification, the proposals are to add multi-dimensional slices (dynamically sized arrays). This statement is splitting hairs, but a Matrix is a container for numeric types that can do things like multiply and have a cholesky decomposition. A multi-dimensional slice is just a container for whatever you want. Such a container is very useful for matrices, but a package would still have to turn a multi-dimensional slice into a Matrix in order to add methods like Add.
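Concretely, something like this hypothetical wrapper, whether or not the container underneath is a built-in multi-dimensional slice:

    // A Matrix today: a flat slice plus dimensions. A built-in
    // multi-dimensional slice would replace the manual indexing,
    // but the linear-algebra methods would still live in a package.
    type Matrix struct {
        rows, cols int
        data       []float64 // row-major
    }

    func (m Matrix) At(i, j int) float64 { return m.data[i*m.cols+j] }

    func (a Matrix) Add(b Matrix) Matrix {
        out := Matrix{a.rows, a.cols, make([]float64, len(a.data))}
        for i := range a.data {
            out.data[i] = a.data[i] + b.data[i]
        }
        return out
    }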
I'm very curious why you say operator overloading is a requirement. It's certainly nice to look at code with operator overloading when you're trying to understand a problem (or maybe it's a curse when you're trying to understand what a block of code actually does!), but it's the plainest case of syntactic sugar I can think of in language design. What are you getting from operator overloading that you can't get from chaining methods?
If you work with vectors and matrices and do a lot of computations, it is very convenient to be able to express, thanks to operator overloading, the formula you have on paper in code with nearly the same syntax. This is one area where, for example, FORTRAN still shines. You can really TRANslate your FORmula into code easily.
In the case of FORTRAN it is because vectors and matrices are recognized types; in other languages they are not built-in types, but the convenience is added thanks to operator overloading.
Yes, exactly. The same could be said for computer graphics, where we traffic in vectors and matrices all day. I want a language in which those are base types, with all of the proper operators already defined, so I don't need operator overloading. Yes, I could write in Fortran, but something a little more modern would be nice.
Fortran is coming up to speed pretty quickly. The 2003 and 2008 updates to it are pretty nice. In fact, I'm in the process of porting the nanomsg sockets library over to it. You can even use GTK3 and Glade to make GUIs for it now, too. I do a lot of matrix calculations, so Fortran is absolutely invaluable.
A simple preprocessor that transforms custom infix operators into method calls might do the trick here. Go comes with good libraries for building this, so you wouldn't have to invent a new Go parser. There are other preprocessors you can look at for inspiration.
That would produce a custom language not understood by other Go users and destroy much of the benefit of the compiler reporting errors straightforwardly.
Swift's high-level syntax will probably open up iOS development to many people who otherwise would have seen Objective-C's square brackets and run away.
I like Go a lot, but Google switching to it for Android would probably have the opposite effect. Many people learn Java in school. Almost no one learns Go.
People don't learn Objective-C in school, but it has not slowed down adoption of the language for the iOS platform. If the platform is popular, and a language supports it well, programmers will learn.
I love Go; having Go as an option for Android development would be awesome because the language is just so much fun. I'm not getting my hopes up though, because Go was intended for server-side stuff. Object-oriented design is a great strategy for rapid app development, and this is the key difference between Swift and Go that I've noticed.
If they add generics, I would expect it would be in a 2.0 or some other release where they're willing to make possibly backwards-incompatible changes, not a point release.
I think it's pretty clear by now Go is not aligning itself with type parameters. If you need generics, don't use Go. Most programs probably don't need generics.
That was the common understanding from 2-3 years ago, when they left the question hanging and said that "we'll do it when we find the proper way etc".
In the meantime there have been several satisfactory proposals (in the lists and elsewhere), and it's not like 200 other languages implementing Generics have had many issues with them.
So it boils down to the Go core team holding up the solution of an impossible tradeoff as the blocker (and a somewhat exaggerated one at that, relative to the costs involved).
And then, they said "the language won't change" etc recently.
I'd like to add that generics can add a lot of complexity to a compiler, and the compiler's simplicity is IMHO of great benefit to the language. I'm happy they're taking their time with it; it's better than ending up with a half-assed implementation like Java.
Most programs don't need generics, but a lot of programs benefit from having them. Generics and exceptions are the two things that keep me from considering Go for future service development. We've stood up a couple of successful Go services, but the maintenance overhead from not having them has been substantial.
Go has something that is functionally equivalent to exceptions; they just don't call them exceptions and make them look a lot different from what we are used to.
If you're talking about panics, you can only handle them in defer funcs, if I'm not mistaken, which makes them significantly different from the exceptions I'm used to, which can be caught in any part of the code. The dependency on defer, in my opinion, makes them unusable as typical exceptions in my case. This was perhaps the intent: making panics and the recoveries thereof truly exceptional and painful, pushing one towards the normal error-handling semantics, which is what I dislike.
> If you're talking about panics, you can only handle them in defer funcs if I'm not mistaken,
That's true...but no different from "exceptions can only be caught in catch blocks".
> which makes them significantly different from exceptions that I'm used to that can be caught in any part of the code.
It doesn't make them any different from exceptions -- just as a catch block can be anywhere up the call chain, a deferred function can have been set anywhere up the chain.
Defer is basically "finally", except the position is different, and "recover" inside it lets it also do what "catch" does.
It's true that you can only handle them in defer funcs, but the defer func could be anywhere up the stack, not just the current function you're inside.
(I'm not sure if that was your understanding or not)
"The panic and recover functions behave similarly to exceptions and try/catch in some other languages in that a panic causes the program stack to begin unwinding and recover can stop it. Deferred functions are still executed as the stack unwinds. If recover is called inside such a deferred function, the stack stops unwinding and recover returns the value (as an interface{}) that was passed to panic."
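A minimal example of that up-the-stack behavior:

    package main

    import "fmt"

    func inner() {
        panic("boom")
    }

    func middle() {
        inner() // no handler here; the panic unwinds through this frame
    }

    func main() {
        defer func() {
            // recover only has effect when called directly inside a
            // deferred function while a panic is in progress.
            if r := recover(); r != nil {
                fmt.Println("recovered two frames up:", r)
            }
        }()
        middle()
        fmt.Println("not reached when middle panics")
    }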
Yes and no: errors are just return values. They’re not special, they are conventional. So you get the whole type system to express errors; they’re just values. And you don’t have a different control flow.
It’s not without warts. You can bail with panic(), which is essentially a throw.
It’s tedious to handle errors, but that’s because actually handling errors is tedious.
"Actually handling errors" isn't tedious at all in Scala. Chain flatMap() across methods that use Try[A], handle the Failure[_] at the other end, and you're done.
I'm not joking when I say it really is that easy. Go makes it (and, really, many other things) very difficult for reasons that are at best murky.
So you call, say, 3 methods, and at the end you have a "file not found" error... which call resulted in that failure? I don't know Scala specifically, but from what I know of most languages that support this kind of chaining, you can't tell. And that's the problem. Did your initialization fail to find its config file in step 1? Did you fail to find the target file you were going to transform in step 2? Was there some other failure in step 3? You can't tell, so you can't actually handle the error.
This is effectively like doing this in Java:
    try {
        DoX();
        DoY();
        DoZ();
    } catch (Exception e) {
        // The code has no idea what failed here.
    }
Whereas, the go code looks like this:
    if err := DoX(); err != nil {
        // handle error from DoX
    }
    if err := DoY(); err != nil {
        // handle error from DoY
    }
    if err := DoZ(); err != nil {
        // handle error from DoZ
    }
This is what Go programmers mean when they say "actually handle the errors". At each step you handle the specific failure from the specific call. It's somewhat more verbose, but it's a LOT more robust against real-life failures.
You'd have the same effect (you don't know what specific call failed) if all of the comment lines said "return err", which is a common pattern in Go. The Scala approach mentioned is just sugar for that. You can of course handle each case separately if you want in Scala, just as you can in Go, but the common pattern has sugar.
> So you call say 3 methods, and at the end you have a "file not found" error... which call resulted in that failure?
The first exception hit pops out the other side and you pattern-match against it. So, the one that did file access and returned a FileNotFoundException. If you have multiple pieces of code that can return FileNotFoundExceptions, you can pass a message in the exception, just like any other Java exception. You can often be more type-specific, too. Bear in mind that defining a new exception in Scala is a one-liner, and you can encapsulate your FileNotFoundException in a RetrievalFailedException very easily and cleanly.
Your method is not more robust, I assure you--it's just verbose and both typo- and thinko-prone.
Errors are just return values, but in Go you either explicitly ignore the error returned (using the "blank identifier" _), or you assign it, in which case you have to deal with it (otherwise Go will complain about an unused variable).
And that is what makes Go awesome. In Java or PHP, you, as a developer, can never know whether the functions you are calling will throw an exception. The only way to know? Read the docs, if you're lucky and there are docs, or read the code... The result? Your program will crash if you didn't add your try..catch block.
Go forces you to either explicitly ignore errors (and your fellow co-worker will know you did it on purpose) or deal with them. No surprises.
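Concretely, the two options look like this (using strconv.Atoi as the example):

    package main

    import (
        "fmt"
        "strconv"
    )

    func main() {
        // Explicitly ignore the error: the blank identifier makes the
        // intent obvious to the next reader.
        n, _ := strconv.Atoi("42")
        fmt.Println(n)

        // Or assign it and deal with it; assigning err to a name and
        // then never using it is a compile error.
        m, err := strconv.Atoi("not a number")
        if err != nil {
            fmt.Println("could not parse:", err)
            return
        }
        fmt.Println(m)
    }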
The larger problem with exceptions, checked or not, is that they interrupt the flow of your program, which is too bad since most exceptions are not that exceptional. That problem is exacerbated by developers who don't know how and when to throw exceptions and end up throwing exceptions for easily recoverable errors.
The other thing I dislike about exceptions is that your code ends up with tons of try..catch blocks.
It's that convention and exclusion of special abstractions that appeals to me by keeping the cognitive load to a minimum. I can focus on the problem without thinking about the language. If Go became like various other languages, then what would be the point? We already have Java, C#, ...
The only really significant difference between panics and exceptions in the Java/Python/etc. style is that, reflecting (apparently) the same explicit-over-implicit philosophy that governs other Go error handling, the common catch-one-kind-of-exception-and-implicitly-rethrow-everything-else idiom of those languages is inconvenient and verbose to express in Go. If you really wanted to use that idiom a lot, it should be implementable as a library function that creates that kind of handler, eliminating the boilerplate at each use; though given Go's convention that panics generally don't cross public API boundaries, you shouldn't need the idiom as much in Go code.
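A sketch of what that handler looks like written out by hand; parseError, parse, and safeParse are all hypothetical names:

    package parser

    type parseError struct{ msg string }

    func (e *parseError) Error() string { return e.msg }

    // parse stands in for internal code that panics on bad input.
    func parse(input string) {
        if input == "" {
            panic(&parseError{"empty input"})
        }
    }

    // safeParse catches only *parseError panics at the API boundary
    // and re-panics on anything else.
    func safeParse(input string) (err error) {
        defer func() {
            if r := recover(); r != nil {
                pe, ok := r.(*parseError)
                if !ok {
                    panic(r) // not ours: keep unwinding
                }
                err = pe // turn the expected panic into an error
            }
        }()
        parse(input)
        return nil
    }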
That's entirely different. Generics can be replaced by code-generation (C++-style templates) in a preprocessor with no loss of safety, or by boring copy-paste (with some loss of safety :-().
Templates/macros/preprocessing do sacrifice separate compilation and separate type checking; so your error messages might not be comprehensible, your compile times can be much slower, and...you need source for your generic container. Rob Pike probably knows the drawbacks of that approach well and I doubt he would go there.
I agree generics would be a serious win for Golang. However, in the meantime we have this workaround/hack which alleviates some of the pain: http://clipperhouse.github.io/gen/
List manipulation? Do you mean linked lists, or arraylists? The only common use of linked lists I've encountered is writing custom allocators, and most programs don't need custom allocators.
A Go map has the definition map[KeyType]ValueType. Go slices and arrays are similarly typed. That isn't heterogeneous, it isn't of type Object, and it covers 99% of the common uses of generics.
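For example:

    package main

    import "fmt"

    func main() {
        ages := map[string]int{"ada": 36} // values must be ints
        names := []string{"ada", "alan"}  // elements must be strings

        // ages["alan"] = "young" // compile error: string is not int
        fmt.Println(ages, names)
    }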
Yeah, the native fundamental collections are generic.
I have a feeling we're talking about different things: when people speak about Go generics, they mean that the language doesn't support parametric polymorphism. Yes, some of the built-in types are special. That's not the point.