I really liked the turkey, and I thought the solution looked elegant, so I had a go at it in CoffeeScript and node-canvas: https://gist.github.com/1475034
All requests completed in under 66 ms, with a mean under 19 ms.
App Engine instances, unless they're running as a special backend instance, live on containerized machines. These machines are very slow and don't have much RAM: 600 MHz and 128 MB. Based on the source posted on the blog, this app was running on a normal instance, as the source did not contain a backends.yaml file.
So it makes sense that a Core 2 Duo running at 2.13 GHz and slow CoffeeScript would be ~2x as fast. My personal experience running Go code locally (on a Core i5 2500K) versus on a normal App Engine instance showed a slowdown of around 6x or more.
I can't argue with that, though I'm running this on just one core (single process).
If I were better with the V8 profiler I would check how much time is spent in JS land versus C, since node-canvas is all C, as is the HTTP parser.
Go runs on a single core if you use channels anyway, unless you set the GOMAXPROCS environment variable. Not sure about threading libraries since I haven't really considered using them in Go. I'm actually curious what people are using them for in Go - maybe games?
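For reference, GOMAXPROCS can also be set from code. A tiny sketch, assuming a Go 1-era runtime where the default really was one core:

    package main

    import (
        "fmt"
        "runtime"
    )

    func main() {
        // Let goroutines run on every available core rather than
        // the single-core default of early Go releases.
        runtime.GOMAXPROCS(runtime.NumCPU())

        // Passing 0 just queries the current setting.
        fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
    }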
This was interesting because I hadn't seen Go used in production before. However, the solution is not optimal for performance (which his benchmarking suggests is one of the goals).
If there is a finite number of possible images and each image takes CPU time to generate, then why not lazy cache the images in a CDN after generating them once?
This works exactly as you describe - the app sets a cache header on the response and Google's geographically distributed front end servers will serve the cached response. You get all this for free by using App Engine.
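For anyone curious, a minimal sketch of what setting such a header looks like in a plain Go handler; the route and max-age are made up, not taken from the app's source:

    package main

    import (
        "fmt"
        "log"
        "net/http"
    )

    func turkey(w http.ResponseWriter, r *http.Request) {
        // Mark the response cacheable so App Engine's edge caches
        // (and any other intermediaries) can serve it without
        // re-running the handler. The max-age is illustrative.
        w.Header().Set("Cache-Control", "public, max-age=3600")
        fmt.Fprint(w, "turkey image bytes would go here")
    }

    func main() {
        http.HandleFunc("/turkey", turkey)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }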
The more complicated something is, the more chances for failure there are. Remember, this was only for compositing the image that would be shared to Google+ -- the one in the browser is done client-side.
I'm working on a game with highly-stylized art that includes a lot of blobs of single colour. It surprised me at first, but the artists have had real gains using JPG instead of PNG. Also, strangely the artifacts give the art a kind of "texturized" finish that's actually kind of pleasant on tablet screens.
Maybe it's for performance reasons. I have a web server in Go that generates (big) PNGs on the fly, and most of the time is spent in the PNG encoding (mostly the compression). PNG encoding is a heavy task.
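A minimal sketch of where that time goes, timing the standard library's encoder on a blank image (the 4096x4096 size is arbitrary):

    package main

    import (
        "fmt"
        "image"
        "image/png"
        "io"
        "time"
    )

    func main() {
        // A large blank RGBA image; the cost of png.Encode is
        // dominated by the DEFLATE compression pass inside it.
        img := image.NewRGBA(image.Rect(0, 0, 4096, 4096))

        start := time.Now()
        if err := png.Encode(io.Discard, img); err != nil {
            fmt.Println("encode failed:", err)
            return
        }
        fmt.Println("png.Encode took", time.Since(start))
    }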
Maybe it's just me, but all blog posts about Go read like they were checked by a marketing guru after they were written. They always reinforce the same basic points; it always sounds the same. If somebody says "Go", my mind always jumps to "feels like an interpreted language".
> If somebody says "Go", my mind always jumps to "feels like an interpreted language".
Because it does! I'm a pretty hardcore Pythonista who has never really gotten very far with compiled languages. I wrote a simple shell in C, once, but beyond that they never felt right. Somehow, Go manages to get things sufficiently "right" for me.
But more importantly, I think Go has a very strong chance of being the Clojure moment for Algol-derived languages. It's trying to address many of the same problems, and taking a similarly refreshing and pragmatic approach. Not to mention that the system-level language ecosystem is quite overdue for some reinvigoration. The glowing tone of so many Go posts may be, in part, due to a fundamental yearning for something that gets so much right -- something like Go -- to finally take off.
Well I'm the guy who runs that particular blog, and I can tell you I'm no marketing guru. While Reinaldo and I edited the post together, I'm pretty sure that the line was in his original text. (Just checked: it was.)
Maybe people keep saying Go feels like an interpreted language because it feels like an interpreted language? :-)
I've done a fair bit of Go programming (https://github.com/jbarham), but have programmed professionally mostly in Python for the past 10 years, and for me Go does feel like an interpreted language because it requires so few type declarations compared to mainstream statically typed languages like C++ or Java.
Checking the app source code for the article at http://code.google.com/p/go-thanksgiving/source/browse/app/a..., the only type declaration I can see in the body of a function is in a type switch where it obviously makes sense. Otherwise the code uses the combined declaration and assignment := operator (http://golang.org/doc/go_spec.html#Short_variable_declaratio...) which is type-safe because it infers the types from the value(s) on the right hand side. IMO this is even better than working w/ a dynamically typed language like Python because the required parameter type declarations in the function declaration tell you the types you're dealing with, but unlike C++ or Java you don't have to repeat that information back to the compiler.
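For example, a made-up snippet in the same spirit (not from the app's source):

    package main

    import "fmt"

    // The parameter and return types in the signature tell you
    // exactly what describe deals with...
    func describe(name string, count int) string {
        // ...while inside the body, := infers everything from the
        // right-hand side; nothing is repeated back to the compiler.
        plural := count != 1                     // plural is a bool
        msg := fmt.Sprintf("%d %s", count, name) // msg is a string
        if plural {
            msg += "s"
        }
        return msg
    }

    func main() {
        fmt.Println(describe("turkey", 3)) // "3 turkeys"
    }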
I see that claim everywhere as well, and I don't understand it. Does it just mean "it has a type system that you won't hate"? There are still plenty of type declarations in that code, whereas if you ported it to OCaml they'd all be inferred.
Yes, basically. As far as I can tell Go is entirely about getting rid of unnecessary cruft and boilerplate and making a practical language you won't hate working in. Inasmuch as most interpreted languages are intended to be practical languages you won't hate working in, they are similar to Go.
As far as the actual type system, the reason they feel similar is probably because interpreted languages tend to be duck-typed, and so is Go. When you code in an interpreted languages you expect arguments to functions to be of a certain type, where type just means "responds appropriately to methods I call on it". In Go, interfaces are just method specifications and any type that has appropriate methods satisfies the interface. For example, you might have an argument "input" of type "io.Reader"[1] to a function. All this really means is that you expect to be able to call "input.Read()" to populate a byte slice. As a result the implicit type guarantees you expect from an interpreted language and the explicit type guarantees you get from Go are basically the same.
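A sketch of that implicit satisfaction (the function is invented; strings.Reader is a stock type that happens to have a suitable Read method):

    package main

    import (
        "fmt"
        "io"
        "strings"
    )

    // countBytes only cares that its argument has a Read method with
    // the right signature, i.e. that it satisfies io.Reader.
    func countBytes(input io.Reader) (int, error) {
        buf := make([]byte, 512)
        total := 0
        for {
            n, err := input.Read(buf)
            total += n
            if err == io.EOF {
                return total, nil
            }
            if err != nil {
                return total, err
            }
        }
    }

    func main() {
        // strings.Reader never declares that it implements io.Reader;
        // having the method is enough.
        n, _ := countBytes(strings.NewReader("gobble gobble"))
        fmt.Println(n)
    }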
> As far as the actual type system, the reason they feel similar is probably because interpreted languages tend to be duck-typed, and so is Go. When you code in an interpreted languages you expect arguments to functions to be of a certain type, where type just means "responds appropriately to methods I call on it".
The person you respond to talked about OCaml. From this I infer he at least has some basic knowledge of OCaml. You don't seem to have any.
Here's a hint: OCaml's object types are structurally typed (and OCaml can actually infer types).
Your whole second paragraph is unnecessary (and unwarrantedly condescending), and the first one is basically a lie (Go is chock-full of special syntax and cases, and Go code is full of cruft).
Oh, I can expand on that if you want, though I expect you don't, and you'll just handwave it away as is usually done with criticisms of Go.
* Go uses special syntax for multiple return values, which is a restriction of the more general concept of tuples. Go could simply provide tuples instead of making MRVs a syntactic special case. That would make the language smaller and simpler: instead of `,` being a magical syntactic feature of the language, it would just be an operator for building tuples.
* Go has generic types, but only for blessed types implemented directly in the compiler and runtime (map and channel, for instance). All user-defined types are second-class citizens at best. That's elevating special syntax to new heights.
* Go has two different initializers, `new` and `make`. `new` is garbage as it only allocates and can't initialize, and `make` only works with (again) a restricted set of types living directly in the runtime, which get special treatment for the sole reason that `new` is insufficient (see the sketch after this list).
* Speaking of things only builtin types get: only builtin types can be indexed via an operator. That is special syntax dedicated solely to builtin types.
* `defer` and `go`. They feature a special, magical evaluation order: the statement must be a complete function call, all inner expressions are evaluated immediately, and only the outermost call is deferred. This leads to weirdness like creating an anonymous function just to call it immediately. And they're not actually needed, since Go has anonymous functions in the first place: both could be builtin functions taking a callable instead of being special forms, and they'd work just as well (better, in fact, since their evaluation model would be the same as everywhere else in the language). These special cases are even weirder when you realize that `recover`, which needs to hook deep into the runtime and do genuinely strange stuff to stop the stack unwinding in place, is a builtin function rather than a special form.
That should be a good start: four clear-cut examples of special syntax (either unnecessary, or syntax that could and should be general) and one special case.
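To make the `new`/`make` point concrete, here's a small runnable sketch:

    package main

    import "fmt"

    func main() {
        // new(T) only allocates zeroed storage and returns a *T; it
        // cannot initialize anything beyond the zero value.
        p := new([]int)        // p is a *[]int pointing at a nil slice
        fmt.Println(*p == nil) // true

        // make only works for slices, maps and channels, and actually
        // sets up the runtime-internal structure behind them.
        s := make([]int, 3)       // usable slice of length 3
        m := make(map[string]int) // usable empty map
        c := make(chan int, 1)    // buffered channel

        fmt.Println(len(s), len(m), cap(c)) // 3 0 1
    }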
Here are some kneejerk devil's-advocate responses to some of your points. As an idealist I generally agree with you, but I can see practical arguments against many of your nitpicks:
* Having first-class multiple-return as part of the specification allows compilers to generate stack/register allocated return values. If the comma operator were a tuple-constructor, then every multivariate return would be a heap-allocated structure, which might have significant performance implications. (I don't know if the Go compiler actually takes advantage of this on any architectures, but the possibility is there.)
* Agreed. Go's way is perhaps simpler, but definitely limiting.
* Here I think Go does things entirely wrong. I actually like that `new` only allocates, because it lends itself to designing very minimalistic and elegant data structures, something that I feel gets out of hand in many large programs in other languages. Being a GC'ed language, I'm OK with the idea of just exporting and documenting some static initialization functions along with the type. But that begs the question of why `make` exists, since it is basically a first-class syntax feature for initialization of the magic builtins that need it. Consistency, please.
* This is pretty silly, I agree.
* I can see definite reasons why the evaluation order of `defer` and `go` should be as it is. Here's an example use case: I want to defer a statement that prints the current value of a local variable. If defer took a callable, the only way I can see to do that with Go's current syntax would be to (1) define a function that closes over my variable and returns (2) an anonymous function that calls fmt.Print(myvar), and then (3) evaluate the outer function with the local I want to close over. If you allow more syntax alterations, I suppose you could properly generalize to something like C++11's lambdas, where you explicitly state the variables you want in the closure so that you only need to write one anonymous function, but that is complicated as all hell. The 90% use case here is `defer Close(abc)`, so even needing the syntax for one anonymous function feels more crufty than the occasional `defer func() { ... }()`. I much prefer Go's way of just saying "we will close over all of the parameters to the outermost function" because it is wayyyyyy simpler.
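For reference, the behaviour in question as a runnable sketch: the deferred call's arguments are evaluated at the defer statement, while a closure sees the final value:

    package main

    import "fmt"

    func main() {
        x := 1

        // Arguments to the deferred call are evaluated *now*, so this
        // prints the value x had at the defer statement: 1.
        defer fmt.Println("deferred argument:", x)

        // Wrapping the call in an anonymous function closes over x, so
        // this prints x's value when the deferral actually runs: 99.
        defer func() { fmt.Println("deferred closure:", x) }()

        x = 99
    }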
The problem with "it feels like a dynamic language" meaning "you probably won't hate it" is that it's somewhat insulting to existing statically-typed languages that are actually pleasant. It sounds all right if you've only ever been exposed to Java or C, but it's not like Go is the first language to have a good type system. It's especially odd considering Go's inference isn't able to handle all the types in even such a short sample.
Well, when you're looking at posts by someone who loves a language, about that language, it's not really that surprising to see positive feedback in them; and that feedback is often about the expressive syntax being nice (edit <-- and that it's fast).
...but that said, I wouldn't mind some impartial coverage.
It's struck me several times how defensive and no-I'm-right the Go crowd gets when anyone dislikes it. Try jumping on the Google group sometime.
Anyone remember that academic paper about the relative speeds of C, Go, and something or other? As I recall, Andrew went out of his way to reimplement the whole thing in Go and make a few snarky comments along the lines of "I wonder where they got those benchmarks from, because that's not what I saw".
There have been many, many blog posts critical of Go in the past couple of years. They're not hard to find. If you're looking for impartiality, the blog of the people who created Go is not the place to look. I am going to keep pointing out how great Go is, because I use it every day and I haven't ever had so much fun programming. That's the truth.
You mischaracterise the blog post about the C++/Java/Scala/Go paper. It was written by Russ Cox, and he used it as an opportunity to demonstrate our profiling tools and how they can help you write fast Go code. (that it refutes the claims of the paper is just gravy ;-)
It begins: "At Scala Days 2011 a few weeks ago, Robert Hundt presented a paper titled “Loop Recognition in C++/Java/Go/Scala.” The paper implemented a specific loop finding algorithm, such as you might use in a flow analysis pass of a compiler, in C++, Go, Java, Scala, and then used those programs to draw conclusions about typical performance concerns in these languages. The Go program presented in that paper runs quite slowly, making it an excellent opportunity to demonstrate how to use Go's profiling tools to take a slow program and make it faster."
There's insufficient data to draw conclusions about the performance of the infrastructure. The application probably ran on a shared machine. We don't know the impact of the other processes running on the machine. We don't know how many requests the application handled in parallel, so we cannot divide by latency to get throughput. We don't know if the application ran on a modern machine.
'0' has the decimal value 48, not zero. '!' has the decimal value 33. Here's a demo program that you can run from your browser: http://play.golang.org/p/qAKG9j5E4V
All characters have values greater than or equal to zero.
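A minimal version of such a demo, along the lines of the playground link above:

    package main

    import "fmt"

    func main() {
        // Character literals are just integer values.
        fmt.Println('0')       // 48
        fmt.Println('!')       // 33
        fmt.Println('0' - '0') // 0: subtracting '0' maps a digit to its numeric value
    }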
Long answer: the Go authors (Rob Pike et al.) have a pedantic issue with exceptions as implemented in Python, Java, etc. They feel those languages conflate errors and exceptions. Errors, unlike exceptions, are an expected part of the programming process; exceptions should be reserved for exceptional circumstances.
Here is a typical example of the Go community's attitude:
"""
I _especially_ don't want exceptions to become an oft-used alternative to multiple levels of error return, as at that point they deteriorate to action at a distance and make understanding large code bases much harder. Been there, done that, got the scars to prove it. (Mostly from C++ and not from Java, but I've seen enough Java to have a healthy fear of runtime exceptions.)
"""
Here's a good Rob Pike quote from later on in the thread:
"This is exactly the kind of thing the proposal tries to avoid. Panic and recover are not an exception mechanism as usually defined because the usual approach, which ties exceptions to a control structure, encourages fine-grained exception handling that makes code unreadable in practice. There really is a difference between an error and what we call a panic, and we want that difference to matter. Consider Java, in which opening a file can throw an exception. In my experience few things are less exceptional than failing to open a file, and requiring me to write inside-out code to handle such a quotidian operation feels like a Procrustean imposition.
Our proposal instead ties the handling to a function - a dying function - and thereby, deliberately, makes it harder to use. We want you to think of panics as, well, panics! They are rare events that very few functions should ever need to think about. If you want to protect your code, one or two recover calls should do it for the whole program. If you're already worrying about discriminating different kinds of panics, you've lost sight of the ball."
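The shape he describes, one recover call protecting an entire call stack, as a small sketch:

    package main

    import "fmt"

    // do is the one place in the program that recovers; everything
    // below it can panic without installing its own handler.
    func do() {
        defer func() {
            if r := recover(); r != nil {
                fmt.Println("recovered:", r)
            }
        }()
        deepInside()
    }

    func deepInside() {
        panic("something truly exceptional")
    }

    func main() {
        do()
        fmt.Println("program continues")
    }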
Not really, no. I've always found them more trouble than they're worth. Explicit, in-line error handling works much better for me.
I'm not sure what you mean by "over and over again," in this context, though. There are only three error checks of that kind in this program. It's an unusual program anyway, in that _any_ error condition jumps to the same path: displaying the default image. In most programs you want finer control over error handling than what's shown here.
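A sketch of that shape; the handler and helper are invented, not the app's actual code:

    package main

    import (
        "errors"
        "fmt"
        "log"
        "net/http"
    )

    // compositeTurkey stands in for the real drawing code.
    func compositeTurkey(layers string) (string, error) {
        if layers == "" {
            return "", errors.New("no layers given")
        }
        return "composited: " + layers, nil
    }

    func turkeyHandler(w http.ResponseWriter, r *http.Request) {
        img, err := compositeTurkey(r.FormValue("layers"))
        if err != nil {
            // Every error check funnels into this same fallback,
            // which is why the repetition stays so light.
            fmt.Fprint(w, "default turkey image")
            return
        }
        fmt.Fprint(w, img)
    }

    func main() {
        http.HandleFunc("/", turkeyHandler)
        log.Fatal(http.ListenAndServe(":8080", nil))
    }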
I doubt that's really the case in proportion to lines of code. Three error conditions handled by one and the same handler doesn't seem unusual at all. Even 10 to 1 or more wouldn't be unusual I believe.
In my view, all attempts at drawing a distinction between recoverable and non-recoverable errors inevitably fail, simply because it is a non-local distinction that cannot be made for any particular piece of code in isolation.
Reading the same code I also cringed, but for a different reason: the validity of `paths` is tied to the validity of `err`; you should never look at `paths` without first checking `err`. We could solve this much better using sum types (also known as discriminated unions):
    // Pseudo Go:
    maybe_paths := filepath.Glob(dir + "/*.png")
    case maybe_paths of {
        Some paths: {
            // Do things with paths
        }
        None: {
            // Handle error case
        }
    }
This makes it impossible to use `paths` if `Glob` returns an error, as `paths` won't be in scope!
Furthermore, not having sum types forces us to include a distinguished value, `nil`, for reference (pointer) types as a way to communicate "no value". This is bad because we can no longer distinguish references that should never be `nil` from those that can be `nil`.
You can distinguish between nils that are errors and nils that are just the actual return value. That is what `err` is for.
The entire reason Go has multiple return values for functions (such as paths and err in this case) is to avoid having to encode errors in the "real" function result.
Now I agree that this is less nice than, say, pattern matching and real sum types, but if you know how to appreciate those, just switch to a functional language!
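For contrast, the actual Go idiom under discussion, with the same filepath.Glob call:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    func main() {
        // The error travels alongside the result; by convention,
        // paths is only consulted once err has been checked.
        paths, err := filepath.Glob("images/*.png")
        if err != nil {
            fmt.Println("glob failed:", err)
            return
        }
        for _, p := range paths {
            fmt.Println(p)
        }
    }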
> The entire reason Go has multiple return values for functions (such as paths and err in this case) is to avoid having to encode errors in the "real" function result.
That's nonsense: MRVs are tuples (conceptually; Go implemented them with special syntax because... well, it's Go, so it couldn't go with the general principle now, could it?), so Go very specifically encodes errors in the function result.
And using a sum type would encode the error in the type, not in the result itself.
This kind of fussy type system is at odds with Go's design goals. It would make the language more abstracted from the underlying machine for little gain. I can't think of an instance of the issue you describe causing problems in real Go code.
> This kind of fussy type system is at odds with Go's design goals.
There is nothing fussy about it, and the only way in which it's at odds with Go's design goals is that Go's design goals include things like "fuck you, only core types get to be generic", "that these mistakes were resolved 20 years ago does not mean we're not going to re-introduce them" and "if we give it a different name it's not the same thing, nah nah nah".
The two decades of experience we've had since the 90s, building things with exceptions, have fairly adequately demonstrated that they're not superior to return codes, despite their (apparent) theoretical advantages.
A lot of those criticisms seem to be based largely on Java - which, with its checked exceptions, is pretty much the worst case in the use of exceptions in a language design.
A lot of people in this sub-thread don't buy that return codes are inferior, but they really are. The issue is that the low-level code can detect the error and knows some strategies that could be applied, but only the high-level code knows what's best (here's the best possible approach [1]). If your library is used by a batch job running on some network-detached headless server, it will probably have to handle errors differently than a fat client would.
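A sketch of that asymmetry (all names invented): the low-level code can only report the failure, while each caller applies its own policy:

    package main

    import (
        "errors"
        "fmt"
    )

    // Low level: can detect the failure, but has no idea whether
    // retrying, skipping, or asking the user is appropriate.
    func fetchRecord(id int) (string, error) {
        if id < 0 {
            return "", errors.New("record not found")
        }
        return fmt.Sprintf("record %d", id), nil
    }

    // High level, batch-job policy: log and keep going.
    func runBatch(ids []int) {
        for _, id := range ids {
            rec, err := fetchRecord(id)
            if err != nil {
                fmt.Println("skipping:", err)
                continue
            }
            fmt.Println("processed", rec)
        }
    }

    // High level, interactive policy: stop and surface the error.
    func runInteractive(id int) {
        rec, err := fetchRecord(id)
        if err != nil {
            fmt.Println("please try again:", err)
            return
        }
        fmt.Println("showing", rec)
    }

    func main() {
        runBatch([]int{1, -1, 2})
        runInteractive(-1)
    }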