> A straightforward solution of the problem is superior to one that makes you feel like a high priest for having discovered it.
It is also much better for the team and the company. Beginning musicians are just trying to play. Intermediate musicians are trying to play as fancy as they can. Master musicians play what's needed by the tune.
It takes somebody who's past all the ego issues to devote their intelligence to making code look simple. It takes great intelligence to solve a complex problem in a way that looks simple and straightforward.
I’d love someday to hear a young coder tell a story about someone they idolized, like: “There was this guy I worked with who once optimized a complicated red-black tree, getting a 300% performance boost. I was baffled and asked, ‘How’d you do that? That’s impossible.’ To which he responded…”
“‘That’s my linked list my son.’”
I believe that this is exactly what Niklaus Wirth did in the Oberon compiler.
Not unless he had a TARDIS at his disposal. I mean, skip lists were discovered in 1989, and by that time, the Oberon compiler had already existed for a few years. I don't recall the source now (I'd have to dig for it) but I believe that the Wirth anecdote referred to his intervention in a case of overzealous students some time before that.
I found this part interesting, in contrast to his "expert amateurs" like Knuth:
> In contrast there are masters in the martial arts who learned their art as a means of survival and became masters in a realistic and hostile environment. We don’t have anyone like this in the programming profession, or at least I haven’t met any.
Any candidates spring to mind? John Carmack maybe? I'm sure there are loads though.
But Unix did, specifically from hard lessons learned from the failure of Multics. Sure, we now know some aspects of Unix to be suboptimal design, but that's with further decades of collective experience and even today the right answers are not yet entirely clear.
It's better for everyone who will use/reuse/maintain/improve that code in the future (including the person who wrote it). But of course it's not easy, and takes extra time.
Personally, at this time, I feel a lot more satisfaction when I'm able to truly understand the problem I'm working with, and refactor complicated/hacky/duplicated code into a functionally equivalent but simple and elegant code. Smart/complex code that works is a great intermediate step though, it's way better than no code. :)
Rob Pike has a very relevant article [1] on this topic.
> Smart/complex code that works is a great intermediate step though, it's way better than no code. :)
I'm totally with you on that. Often, the way to solve a problem is to first solve it wrong. In the process of doing that, you can see the way to solve it right. Trying to reach the endpoint in one fell swoop of envisioning and coding is actually another beginner move I've seen a bit.
What I'm totally not down with is complexity as an end goal. And yes, I've seen this mentality in programming.
For the love of Thor, please provide a few code examples illustrating the content, hard to get an idea of how to simplify things with Go with mere words and ideas.
Definitely. I'm left with no idea how Go actually helps. You can trivially implement 'pipelines' in any language by making a handful of queue structures and using them to feed your functions. Why were there callbacks in the first place?
My thought exactly. The whole thing strikes me as a case of: Have a hammer and everything looks like a nail. First with callbacks, and second with channels.
It is quite hard to read a stream of text about programming without examples.
But reading it, it seems like his problem could be more with Perl, which certainly has a reputation for supporting and encouraging unnecessary complexity. But the argument is so vague that you could write the same article with Python substituted for Go - at least as far as what the article actually tells you about Go.
So, Go has message passing. Just like Erlang, since 1986.
Only without per-process heaps, which make crashing processes safe. And without process linking and supervision, which helps systems built using Erlang/OTP achieve nine nines of uptime. Instead, it includes null references, also known as Hoare's Billion Dollar Mistake.
But it's not enough to scorn the industry; Go's designers also look down their noses at academia.
A modern, ML-derived static type system? Generics, which would enable the unwashed masses to write their own `append`? Ain't nobody got time for that — wait, what? Oh, Rust does?
Go's tooling is fantastic, and its pragmatism is commendable, but ignoring the last 30 years of programming language research is not.
Erlang and Go are very different programming languages. They have a similar form of message passing, but the semantics are very different. Go's concurrency model still exists within a single process with a shared heap, which is very different to Erlang's model.
> it includes null references, also known as Hoare's Billion Dollar Mistake.
The very notion of null references being inherently bad is highly contentious. Just because someone gave an idea a catchy name, doesn't make it true.
> Ain't nobody got time for that — wait, what? Oh, Rust does?
Go and Rust have very different design goals. If you think Rust is more appropriate for you, use Rust.
> ignoring the last 30 years of programming language research is not.
This is a common refrain from Go haters, but it's really not true. Just because you don't do everything everyone else is doing doesn't mean you have ignored them. Go includes the language features we felt were necessary in a productive language.
Your comment frames Go as this language that thumbs its nose at a lot of good ideas, but the reality is that every design decision has tradeoffs. If you don't have null pointers or generics, then you need a more complex type system. The Go designers may not have made all the decisions _you_ would have made, but they were not made in ignorance of the alternatives.
And this leads back to the message of the original article: Go encourages the programmer to write simple code that is obviously correct. A more complex type system would detract from that goal.
> Go's concurrency model still exists within a single process with a shared heap, which is very different to Erlang's model.
I've already mentioned Go doesn't have per-process heaps, and I consider this a step backwards from shared-nothing message passing.
I don't understand how you can argue that facilitating the threads-and-locks model leads to "simple code that is obviously correct". Writing correct multithreaded code is widely recognized to be difficult.
> The very notion of null references being inherently bad is highly contentious.
If by this you mean people keep[1] pointing[2] out[3] the problems with nulls, and other people keep ignoring them, then I fully agree. Otherwise, please direct me to a discussion offering a pro-null argument.
For code to be correct in the presence of null references, the programmer must remember to check every nullable value of uncertain status, without support from the compiler.
Alternatively, in a language with option types, only values explicitly marked as optional must be checked, and forgotten checks are pointed out by the compiler. Less code, more safety.
Tell me again, how does allowing null references lead to "simple code that is obviously correct"?
> If you don't have null pointers or [do have] generics, then you need a more complex type system.
Certainly. More complexity for the language implementers; less complexity for the language users.
> For code to be correct in the presence of null references, the programmer must remember to check every nullable value of uncertain status, without support from the compiler.
If you don't have null pointers then you need to push the responsibility for checking for initialization somewhere. You seem to be advocating that place is in the type system. That's fine, but you can't say it wouldn't make the type system more complex.
I see more Go code than probably anyone in the world, and I don't see anyone suffering from the scourge of null pointers, however bad people say they are. I think in certain contexts they can be problematic (SQL is a good example), but the concept of "nil" in Go is quite useful and easy to understand. For instance, unlike C++ it is quite valid to call a method on a nil receiver.
> Tell me again, how does allowing null references lead to "simple code that is obviously correct"?
It's a tradeoff. You get a simpler type system in exchange for null pointers.
> More complexity for the language implementers; less complexity for the language users.
No, not just for the implementers. For the users too.
> No, not just for the implementers. For the users too.
You haven't bothered to address option types. This sort of unqualified assertion may lead people to believe Go's questionable design decisions were, in fact, made out of ignorance.
A language with null references is exactly like a language with option types, where every reference value is by default optional.
Removing this default does not increase complexity for the user. Conversely, it allows the user to write less code and enjoy more safety, which is clearly a complexity decrease.
The main Go authors come from a C background, and they despise C++ and Java. It seems that those languages are their main experience, and hence, Go does not look too different from them.
Go was originally presented as a "systems" programming language, and as a C++ replacement. It failed to attract C++ and other systems programmers, while attracting Python and Ruby coders. In retrospect, it seems to make sense.
I suspect the reason adding generics is hard in Go is due to its weird composition system (they loathe object hierarchies too, remember?)
I concur, that we should be using and developing languages that have strong, expressive type systems. Furthermore, these languages are not "complex" to use or understand. I really admire what the Rust people are doing. They are writing a browser engine while developing the language, so they can continually evaluate their design choices, and if needed, change things as they go.
Go looks like it has been designed into a corner, but it is too late to back up now. The main driving force it has is that it has Google as a brand name behind it, unfortunately. Otherwise, it most likely wouldn't have gone anywhere.
> Go looks like it has been designed into a corner, but it is too late to back up now.
Not really. We had years of experience with Go—and made many changes—before we stabilized the spec with Go 1. We are happy with the major decisions we have made so far, as are a lot of Go users. Of course there are minor things we would change if given the chance today, but the basics of the type system and the presence of nil pointers are not among them.
The reason generics is hard is because we want to find a way of doing it without losing the feel of the language we have right now (which we quite like, thanks).
> A language with null references is exactly like a language with option types, where every reference value is by default optional.
Yes.
> Removing this default does not increase complexity for the user.
Yes, it does. They need to be aware of the semantics of the option mechanism, how to specify whether a type is optional, and when that is appropriate. They must also perform checks when converting values from optional to non-optional types.
In Go the choice is made for you: reference types may be nil.
> They need to be aware of the semantics of the option mechanism, how to specify whether a type is optional, and when that is appropriate.
Being aware of the semantics of the option mechanism is exactly like being aware of the semantics of null references. You've just agreed one is like the other with a default.
Specifying whether a value is optional is less complex than the usual way of specifying whether a reference can be null — a documentation comment. Comments aren't usually checked by the compiler.
Knowing whether it's appropriate for a value to be optional is exactly like knowing whether it's appropriate for a reference to be null. However, once a value is determined to exist, subsequent code can assume the value is non-optional, avoiding subsequent checks with no decrease in safety. Less complexity.
> They must also perform checks when converting values from non-optional to optional types.
Surely you mean the other way around?
Checking whether an optional value exists is exactly like checking whether a reference is not null. However, fewer checks are necessary. Less complexity.
> Specifying whether a value is optional is less complex than the usual way of specifying whether a reference can be null — a documentation comment.
Is it? I agree there's definite virtue in having this property checked by the compiler, but I think you and I will have to disagree on the notion of complexity here.
The beauty of option types is that they are mostly used as return types. Most function arguments are non-nullable pointers or references; only ones that might be None are passed as options. This makes reasoning about code much easier. I just look at the function signature, and I know the types of arguments it expects. I also don't have to litter the beginning of each function with null checks for each reference parameter.
> They must also perform checks when converting values from optional to non-optional types.
There are multiple ways to manipulate option type variables. The following comment from a Redditor sums it up very nicely:
"What is interesting is that (coming from scala and haskell) you almost never pattern-match on option types. The power of option, imho, is that you don't have to care whether it is Some or None, you write the same code regardless. If you are pattern-matching the whole time, you haven't gained all that much over checking for nil/null.
Option is a container, and we can use my_var.chain(...) and my_var.map(...) to update the things in the container. And the joy of it is that these methods will automatically do the right thing, with respect to Some/None, so you can string a bunch of these calls together, and if my_var is Some(...) it will apply all of the specified functions. If it is None, it will remain None."
What I get from your comment is that option types work well in Haskell and Scala. This doesn't surprise me, as both languages place a strong emphasis on the type system, and so option types are easily supported.
For option types to work well in Go you would need to add more support to the type system. But we don't want a more complex type system. That's a tradeoff we made.
Again, I'm not really sure why people keep getting bent out of shape about this. Haskell, Scala, and Rust exist for people who want to write that kind of code. I prefer to write Go code.
> What I get from your comment is that option types work well in Haskell and Scala. This doesn't surprise me, as both languages place a strong emphasis on the type system, and so option types are easily supported.
Option types need no type system support beyond having option types at all. Think of an option type as a container. It's either empty, or it has a single element inside it.
You can manipulate your container by checking if it's empty, getting the value out, manipulating the value and putting the value in a new collection. That's the basic low-level primitives of interacting with an option type.
And you can also use higher-order operations, e.g. map over the collection which will only "do things" if there's a value and will just return an empty collection if there isn't. These are the higher-level combinators on option types. They require no type system complexity outside of having option types in the first place.
Yes. Also, these low-level primitives already exist in Go, as support for pointers, or references. The missing bit is requiring references to be explicitly annotated as nullable or non-nullable.
I disagree with you on the relative complexity of type systems, and, as someone passionate about my craft, I despair that Go is merely good, not great, due to what appear to be uninformed design decisions.
You may prefer to write in Go, but you don't work in a vacuum. Your creation is out there, gaining mindshare, and propagating mistakes made half a century ago. As a language designer, you have the power to shape human thought for years to come, and the responsibility to remain intellectually honest. This is the meaning of Hoare's apology. He didn't know better. You have no such excuse.
I can't speak about the others. However, I feel that we have reached an unfortunate state in the industry when a sub-par technology starts to get picked up because of brand names, and not on inherent merits.
Null references do make reading code more complex for the user, since every function call must be prefaced with a null check. If I make a function call in every line of my code, adding all the null checks would easily double my line count, burying my code in Java-like verbosity (OK, not that much, but you get the idea). I suppose a good IDE could help with this if option types remain unavailable.
Another common complaint with null references is that there is no compile-time type safety that you get with option types. The Go team could mitigate this by having the compiler check for possible null references that may not have been caught and emit warnings or refuse to compile.
You're not going to check that s != nil before calling s.Foo, even though s is probably a *server.Server and could be nil. But it won't be, if the New function is correct. Yes, the compiler doesn't guarantee that New gives you a non-nil pointer, but it also doesn't guarantee a lot of things about the behavior of the program.
I was mainly referring to function arguments. Any function that takes a pointer type has to have null checks at the beginning, since any of the pointer arguments may be null.
"has to" is a bit strong. The standard assumption is that pointer arguments should be set (or the function is documented otherwise), and the responsibility is left with the caller. It just bubbles out from there.
Yes, this is the way things have been done since 1965. It sounds simple enough, but as it turns out, in complex systems, this is a common cause of defects.
Programmers forget to check things all the time. Assumptions made in different areas of the code, by different programmers, at different times, do not always hold.
Fortunately, computers are good at remembering to check things. We just need to allow them to do so.
My info came from this thread https://news.ycombinator.com/item?id=5792842 which, now that I think about it, appears to be a bunch of second-hand info about the golang mailing lists. rsc posts here about golang sometimes but I don't think he's mentioned generics specifically.
I recall the launch documentation specifically saying generics are "... a work in progress". Ignoring academic progress is easy when programming is far more often a trade than the application of computer science. I once saw a video comparing Go to an unnamed language that matched or surpassed Go in all the ways the video compared the two. The language was Algol 60. Naturally, can't find the video now.
Is your criticism of Go based on actual experience using it, or simply the fact that it lacks certain features that you deem critical in a modern programming language?
I think this is one of the really great things about Go. Some detractors ask "There is so much to be gained by having generics! What would be lost if Go had generics? Nothing! Ergo, Go is wrong to not have generics".
This line of argument is flawed. Go curbs the ability of people to write crazy (and ultimately mentally expensive) abstractions. The lack of generics in Go is a very good thing. Especially when it comes to collaborating on a large unfamiliar code base.
I have been productive in Go and haven't missed the ability to write generic code so far. Usually it turns out there is another way to achieve what you want without them. If there isn't, maybe I'm trying to solve the wrong problem, or it is the wrong language for the task.
If anything, generics reduce the mental expense of reading code, because if a container has a generic value type I know that it isn't going to be doing something interesting to those values behind the scenes.
Generics may not be the correct solution, but I would really like a way to make collections which are general enough to be used by any datatype. As of now, I cannot do that without doing the typical `interface{}` approach.
The built-in data structures + channels are able to handle this, but if I just want a set of elements, what do I do? Make a `map[foo]bool`? If I see such a piece of code in an unfamiliar code base, I have no idea whether to test for key existence or whether I have to test whether the bool is true. A generic set would leave me puzzled and type unsafe[1]: what types could possibly exist in this set? A set of foos is not that hard to comprehend, and is in fact much more straightforward to understand than a generic set, which in unfamiliar code may contain anything, or than a mapping from foos to bools.
[1]: Rob Pike mentions in http://www.youtube.com/watch?feature=player_detailpage&v... that type safety is of high importance for Golang, but how does one achieve that if all the different datatypes I implement/need use `interface {}` where I have to cast all values afterwards? That seems very type unsafe, from where I stand.
Your puzzling "generic set" isn't what is usually meant by a generic set in a statically typed language.
A generic set would be defined disregarding the type of its elements: "set of ?s". However, a single instance of a generic set would only be allowed to contain members of a single type: "set of foos". This is called parametric polymorphism.
That's only true in a language with type inference and type aliases.
In Java, generics make your code completely unreadable, as content is buried in boilerplate.
Map<Converter<Comment,Post>, ReadyState<CommentEnum>> ops;
for (Entry<Converter<Comment,Post>, ReadyState<CommentEnum>> e : ops.entrySet()) { ...
In Go, generics would be an unqualified win for the author/reader. No one disputes that. The only dispute is over the compiler and runtime costs.
Generics (outside of C++'s templates, which are a pretty odd duck[1]) incur very little compiler cost and little to no runtime cost (depending on whether they're erased or reified), unless you decide to (optionally) trade some added compiler cost for some runtime improvements by generating type-specialized collections behind the scenes.
[1] and their cost is seriously compounded by the rest of C++; D has C++-style generics with a far lower compilation cost
Depends what you're comparing against. If it's a poor man's generic implementation in current Go using interface{}, then obviously there would be no extra runtime overhead if the compiler generated that for you in a type safe manner; but compared to manually specialized collections or the built-in generic collections, there is quite a lot of runtime overhead. Whether the compiler costs are manageable or not is up for debate.
The built-in collections can remain built-in if that gets you off. Same deal for manually specialized collections, except those aren't generic, so they remain not generic.
I think those are both rather bad solutions; I want generics precisely because they can provide equal performance to the built-in collections without the obvious drawbacks of a limited set of primitives or manually copying code. I'm just saying that "very little compiler costs and little to no runtime costs" is somewhat misleading, because there are significant runtime costs compared to those alternatives; Go needs code generation based generics, which are certainly doable, but incur significant complexity and runtime design issues.
> I want generics precisely because they can provide equal performance to the built-in collections
That is most definitely not the core use case for generics. They can be lifted into type-specialized collections, but the core point of generics is type safety.
Fair enough. Although my priorities somewhat differ from the norm, the document you criticized elsewhere does cite runtime overhead as a significant obstacle.
> the document you criticized elsewhere does cite runtime overhead as a significant obstacle.
And as I noted in my criticism it is completely wrong in doing so: the runtime overhead it notes is boxing in Java, ascribing it to generics.
But boxing is not a property of Java's generics; it's a property of Java's collections. Java collections have required boxing value types since the first release (~1996), while generics were only introduced in 2004. The boxing in Java collections is orthogonal to generics.
In fact, one of the properties of C#'s reified generics is that it allows the compiler and runtime to avoid object (boxed) collections when it can use unboxed collections for value types.
True. The author is smart, and I would like to believe that he knew that reified unboxed collections would solve the space problem, but possibly be even worse than Java in time, since there is no JIT to inline repeated virtual calls to the same target. I can't guarantee that he did. ;p
Furthermore, other languages seem to have got this right without the fabled runtime/compiler costs that the Go authors seem to talk about (see D for instance). What's funny is that there are runtime costs today in Go that could have been resolved by using generics. All interface calls are virtual calls, and cannot be inlined.
> No one disputes that.
Citation needed. I read and heard from many people that do.
I can't cite my claim that "no one" does something. I'm happy to see any example you provide of the Go team/nuts saying that generics are a bad language idea.
C++ templates aren't generic (or more specifically, aren't parametric) for a few reasons, one of which being that they contain a hidden overloading mechanism.
I have to say, I love some of the principles of Haskell. I am a novice though and I currently don't stand a chance of picking up and hacking on a project without serious amount studying.
However, with a cursory glance at Go, I was able to be productive in it and hack on the go compiler and runtime itself.
Maybe this is more a statement about how little I know about functional languages or some of the theoretical techniques used in Haskell.
Whilst my mortal brain comes to grips with Haskell I am being quite productive in Go. I have learned a lot on the way, too.
Whether by design or accident, Go is filling the use case of "you tried Python but performance wasn't good enough". People will bring up cases like "you can't write a generic map function, or tree data structures!", but Go isn't the language for those things. In Go, you would just use a for loop or a dictionary-based map. If your program gets to the point where that no longer cuts it, then it's time to move on to a different language.
It seems to have been by accident, as they were originally designing it to be a "systems" programming language to take on C++. What eventually happened is that most(?) Go programmers came from Python, Ruby, and other scripting languages.
> In Go, you would just use a for loop or a dictionary-based map. If your program gets to the point where that no longer cuts it, then it's time to move on to a different language.
Really? I thought the reason Go didn't have generics is because they couldn't decide on an implementation that met their goals? That is, was quick and didn't emit a ton of code.
Russ Cox, one of programmers working on Go, described the dilemma as: "do you want slow programmers, slow compilers and bloated binaries, or slow execution times?"
My god, this is such a completely dishonest bullshit strawman I'm impressed he had the balls to post that garbage. Java's boxing does not come from its generics, it's the other way around (Java's non-array collections have never been able to hold unboxed values), and C++'s templates are pretty much unique in their complexity (and definitely not what PL people think about when talking about generics) and compounded by all the rest of C++'s baggage (D has C++-style templates and is nowhere near as slow to compile).
And his reaction to comments (that is, his complete failure to acknowledge how huge a strawman his original post is, and complete absence of response to any mention of C#, Eiffel, Ada, Haskell, MLs and others)... that is supposed to be a defense of goteam's thought process?
I'm super pro generics and love me some Haskell and ML. However, generics really do have a cost. Since the size of the objects in a generic container can vary, you either need to take the ML route of making everything a pointer (which add runtime cost) or the C++ (yeah, not really generics, but still) route of duplicating the code for every size object you store in the container. While I think Go made the wrong choice with generics, I understand their thought process.
I'm curious what the cost would be of implementing generics by having an (implicit) parameter for the element size. It would certainly be a bit slower than the fully specialized version, but it avoids boxing and doesn't duplicate code.
Unfortunately, that is their thought process. I find it hard to believe Rob, Russ, etc actually genuinely have never heard of ML, but everything they ever say on the issue suggests that is the case. And when people keep showing them "hey look, this problem was solved 30 years ago!" they just outright ignore them and don't acknowledge it.
>I have been productive in Go and haven't missed the ability to write generic code so far. Usually it turns out there is another way to achieve what you want without them.
Yes there is. Involving more code, less DRY, or loss of type safety.
I don't think you intended that as a preface to your flawed argument, but it worked out well. No, the lack of parametric polymorphism is not a good thing. You are losing simplicity, not gaining it.
> The channel is the analog to the Unix shell’s | character.
This sentence was italicized, and rightfully so. I've written a few small Go programs, but I haven't run into a problem where I thought, "A channel is definitely the right solution here." Yet I love and feel quite comfortable with shuffling data through big shell pipelines. Perhaps I'll think of a channel next time I'm reaching for a pipeline.
I wanted to say this but couldn't find the right words. The author manages to do it. This is something I wrote: https://github.com/adnaan/hamster. The whole experience of programming in Go is like moving from point A to point B in a straight line.
Consider this anecdote: I recently put together a DIY 3d printer. I didn't have the right tools to cut steel bar, so I made a clever jig and used a shitty Dremel without enough clearance. Clever? Yes -- it got the job done with limited resources. Smart? No -- dumb, actually. Smart would have been driving 15 minutes to Harbor Freight to buy a $20 cut-off saw.
Perhaps I'm reading this wrong, but if I'm not, it seems to be making a great point but missing a far greater one.
This is the more important lesson.
Replace clever code with unremarkable code.
Go has nothing to do with it. Perl has nothing to do with it. Switching languages most certainly has nothing to do with it. What does this is keeping yourself in check, stepping back from your problems to understand their fundamentals, and then looking for more appropriate and elegant methods of dealing with them, in any language.
go func() {
    for {
        select {
        case <-time.After(time.Second):
            log.Println("timeout")
        case <-stop:
            return
        case val, channelOpen := <-inputs:
            if !channelOpen {
                return
            }
            outputs <- process(val)
        }
    }
}()
Nice. I still haven't given Go much time, but I really like this form, where you can switch on multiple blocking calls.
I've definitely wanted this in other languages where I'm using thread safe queues for communication, and had to do manual multiplexing into a new queue whenever I wanted to do a blocking get on multiple queues.
Does anyone know of a convenient way to do this when handling multiple instances of Python's Queue, or similar structures like Haskell's Chan?
Thanks for posting this. I found something similar but on first blush didn't think it was quite right as I was mistakenly thinking that you could only do this for TChans with the same type. Looks like it might be just what I'm after! When I think about it, STM seems like it makes sense here, since you probably want the ability to cancel/rollback the blocking gets that didn't return.
EDIT: Actually, this isn't right. The example linked does "if get from channel 1 fails, get from channel 2". I'm looking for "select" on multiple channels, where the result is based on the first channel to return.
EDIT again: Apparently I can't read, and this does achieve what I'm looking for, I just have no basic experience with STM. Awesome, will try this later!
njs12345 answered for Haskell; I don't remember seeing a way to do this in Python, though depending on the situation you could of course just use a single queue for everything.
FWIW there's no primitive letting you wait on multiple conditions in threading, and Queue is implemented in terms of a pair of conditions.
I got that through TinEye, and then through comments left on various places. I think with a bit more noodling I would have got a name, but HN user Wooster got there 15 minutes ago.
This is the kind of thing that can be hard to get an answer to using web search engines.
Perhaps the Google search a day could be expanded to include "tricky" and "hard" levels?
It's tough to give examples of this in Perl without a lot of narrative (and sometimes a lot of code), but the technique I'm referring to is currying: programs that write programs, via functions that write functions. This, of course, is what you do all the time in LISP. It takes half the (Higher-Order Perl) book to illustrate the technique and its power. Which is kind of the point: shouldn't it be a commonplace thing that doesn't take so much work to get around to explaining and using?
> shouldn't it be a commonplace thing that doesn't take so much work to get around to explaining and using?
Why? It's a reality of the world that more complex things take more time and effort to learn. However, that is often because these more complex things allow you to do a lot of very useful things much more efficiently. I would prefer to drive over a bridge built by an engineer who learned all those difficult equations, material properties, and building codes as opposed to a high school kid with a few physics courses under his belt.
Programming is similar in some respects. As you get better and better you acquire more and more tools to do what needs to be done. Now granted, if you are working on an interface that needs to be easily accessible to the widest range of people it makes sense to simplify. However, cleverness has its place in code that is expected to be read by specialists.
In the end, even if you avoid all the clever tricks and shortcuts you know, a large enough project will still be utterly inaccessible to a novice. The real challenge of projects that complex becomes less about the specific detail of how a piece works, but more about how all the pieces work together. If you're skilled enough to follow the design of a project like that, I don't think it's too much to ask that you either know these "clever" techniques, or you should be willing to learn.
Looking at the code you linked in the article, I think part of the problem is the fact that there are entire pages of code without a single inline comment. When you're doing these clever things you really need to document every logical step in order to understand and verify your thought process later on. You also have to be ready to accept that sometimes you will mess up in your cleverness. In fact, if you are getting a lot of edge cases that's a good signal to go back, re-read your comments/design notes, and find where you could improve your approach.
Ironically, I would argue that Go channels are actually an example of doing something "clever" the correct way. These channels are very effective at separating a single concept from a whole pile of abstractions, and doing a lot of clever interactions under the hood in order to ensure it's all effectively synchronized. In other words, using Go channels is using the same type of "clever" techniques once they've been abstracted away.
> I would prefer to drive over a bridge built by an engineer who learned all those difficult equations, material properties, and building codes as opposed to a high school kid with a few physics courses under his belt.
I think this kind of analogy is misleading. Those things are more like the equivalent of understanding data structures and algorithms, performance estimation, being able to use a profiler effectively, etc. Civil engineering is very conservative in terms of the kinds of language and graphics that can be used to express a design. Anyone doing the equivalent of currying or macros (making up one's own language) would be thrown out. I would think it's probable that when programming is as old as engineering, its modes of expression will be similarly limited/standardised.
> Civil eng is very conservative in terms of the kinds of language and graphics that can be used to express a design.
I would argue that programming is far more specific in terms of the kind of language that can be used too. In fact each such language tends to be described in exhaustive specs.
> Anyone doing the equivalent of currying or macros (making up ones own language) would be thrown out. I would think its probable that when programming is as old as engineering its modes of expression will be similarly limited/standardised.
Both things are very broad when it comes to what can be made using those languages. A civil engineer may use his language to build a house, a sky-rise, or a nuclear power plant. Each of those will have different complexities, and different requirements of knowledge and qualifications. In fact I imagine the engineer working on the latter will know how to do a lot of things that the engineer who works on the former would consider to be akin in complexity to currying and macros.
The situation is the same in programming. Some people may be working on projects where currying, macros, and other techniques are a major benefit. These are, after all, extremely powerful tools. Just like with the civil engineer, the challenge is knowing how to use them properly.
I've seen it many times before, and it's always been amusing, but looking at it within the context of "clever code" is brilliant. That's exactly what it looks like some people have done (figuratively) to produce the code that they did.
It's so strange to me that people describe things like higher order functions and map/filter/reduce as being clever / complicated and think manual iteration and indexing into an array is "simple".
I hate to keep linking to this talk because I don't want to look like too much of a Clojure fanboy, but I think a lot of people would benefit from re-examining their definition of simple: http://www.infoq.com/presentations/Simple-Made-Easy
This article didn't make much sense to me. In particular, what was it about the problem that prevents having a "stage" object with a "do_stage" method, which takes the input object and returns the output object (or some error code, etc.)?
I feel like the answer has to do with concurrency, but the description in the article was too vague to get a more precise idea.
> Programmers get very attached to their favorite languages, and I don't want to hurt anyone's feelings, so to explain this point I'm going to use a hypothetical language called Blub. Blub falls right in the middle of the abstractness continuum. It is not the most powerful language, but it is more powerful than Cobol or machine language.
> And in fact, our hypothetical Blub programmer wouldn't use either of them. Of course he wouldn't program in machine language. That's what compilers are for. And as for Cobol, he doesn't know how anyone can get anything done with it. It doesn't even have x (Blub feature of your choice).
> As long as our hypothetical Blub programmer is looking down the power continuum, he knows he's looking down. Languages less powerful than Blub are obviously less powerful, because they're missing some feature he's used to. But when our hypothetical Blub programmer looks in the other direction, up the power continuum, he doesn't realize he's looking up. What he sees are merely weird languages. He probably considers them about equivalent in power to Blub, but with all this other hairy stuff thrown in as well. Blub is good enough for him, because he thinks in Blub.
> When we switch to the point of view of a programmer using any of the languages higher up the power continuum, however, we find that he in turn looks down upon Blub. How can you get anything done in Blub? It doesn't even have y.
> By induction, the only programmers in a position to see all the differences in power between the various languages are those who understand the most powerful one. (This is probably what Eric Raymond meant about Lisp making you a better programmer.) You can't trust the opinions of the others, because of the Blub paradox: they're satisfied with whatever language they happen to use, because it dictates the way they think about programs.
I think this basically nails the attitude of every single Go convert that I have spoken to. I am starting to get tired of hearing lines like "that Lisp stuff is too complicated, I am a Go programmer" or "why would you do x in y way, it's so much better in Go".
I don't think I have ever seen such unrelenting fanboyism in a programming language. It very well may be a fantastic language, but it's not an end-all, be-all; we would do well to remember that.
> I don't think I have ever seen such unrelenting fanboyism in a programming language.
It's died down, but discussions involving C# used to be rough: take pretty much any PL feature added since 1.0 (generics, lambdas, local type inference, etc.), and any time it was suggested before its official blessing and addition by Microsoft, it would be met with charges of useless fancy-pants PL wankery of no use to Real Programmers in the Real World who were Very Productive, and this was Abstract Useless Stuff for CS Undergrads Who Didn't Work In The Real World.
I think it's mostly inevitable; to me it's no different than asking a group of farmers which make of pickup or tractor is best, or a deliveryman which brand of shoes is best... I'm pretty sure you could come up with at least one analogue for most any vocation.
In programming, we have: language, practice (formatting, documentation, framework, etc.), methodology (testing, planning), development environment (integrated or otherwise), etc. and each of these is a lightning rod for disagreement. I think it just seems to be more prevalent for software engineers due to the fact that a majority of our discourse occurs online.
I'm a Go convert. And I have learned and understood the concepts of a lot of other programming languages before, including Haskell, Scala, Rust, C++, Scheme, Common Lisp, Smalltalk, Ruby, Python, Perl, PHP, Lua, JavaScript, Groovy, CoffeeScript, Dart, Java, C#, Objective C, C, Pascal, Basic. I understand higher-order functions, currying and sectioning, parametric polymorphism, monads (including the option/maybe monad), product and sum types, pattern matching, multiple dispatch, meta-programming etc. Yes, Go lacks most of these, it's a "dull", imperative programming language without parametric polymorphism or an ML-style type system (which is not "modern", btw, it's from the early 70s just like C) and with nil pointers. Nonetheless, Go is the most enjoyable programming language I've ever used. It strikes the right balance between features and their cost. I'm tired of being told that I'm just too stupid to see the light.
I'm not sure what your point is, whether you think Go is simple and powerful like Lisp, or the new Java, but simply quoting "The Blub Paradox" makes you sound pretentious (on a site where the largest percentage of readers are probably already familiar with said essay, no less).
I don't think this is really applicable to the article anyway, since it's an article commenting on two very different languages and programming styles, not about being satisfied with a Blub.
I would agree, and add that simply referencing the article, or quoting relevant portions, along with introducing some original narrative would've been much more productive (conducive to conversation).
>I'm not sure what your point is, whether you think go is simple and powerful like lisp, or the new java
Really? I find it hard to believe you can't tell what is intended. It certainly is applicable to the article, as the article can be summarized as "I've moved up a language on the blub line, and now my old perfect language is clearly inferior, but my new blub is perfection incarnate".
> One of the things I like about Go is that it offers much better ways to write things that you’d usually write with callbacks.
I fail to see how saying that one likes replacing callbacks with channels translates to saying Go is "perfection incarnate".
Go isn't perfect, but they hit a really good subset of features that a lot of people find very productive. I, and many others that use go, find that pragmatism worth the tradeoff.
Go makes what my co-workers and I do easier than other languages we've tried. Period. That's why we're using it. That's the reason others I've communicated with have given as well. It's not a magical unicorn, it's not going to impress PL theorists; it's going to get work done.
The lack of generics, or parametric polymorphism, or immutable data structures, or LINQ syntax, pattern matching, etc., isn't keeping us from shipping code.
>I fail to see how saying that one likes replacing callbacks with channels translates to saying Go is "perfection incarnate".
It isn't. It is saying "I don't realize that this is actually worse than what lots of languages provided before go even existed, so I think it is awesome". Which is the point of the blub response. Go isn't interesting, it is blub.