Since they cite me (and my essay from 2014) as part of their decision-making process, I want to throw in 2 cents here. They ended up deciding on Go, whereas I have ended up preferring Clojure, yet I agree with a lot of what they say, so I'll try to clarify why I reached a different decision than they did.
I understand what they mean when they write:
I think the first time I appreciated the positive aspects of having a strong type system was with Scala. Personally, coming from a myriad of PHP silent errors and whimsical behavior, it felt quite empowering to have the confidence that, supported by type-checking and a few well-thought-out tests, my code was doing what it was meant to.
There are times when I appreciate the strict type-checking that happens in Java. I do get what they mean. But there are also many times when I hate strict type-checking, in particular when dealing with anything outside the control of my code: whimsical, changing 3rd-party APIs that I have to consume for some business reason, or even 1st-party APIs that feel like 3rd-party APIs because they are developed by another team within the same company, or because for some reason we cannot fix the broken aspects of some old API that was developed in-house 6 years ago. Because of this, I have become a proponent of gradual typing. If I am facing a problem that I have never faced before, I like to start off without any types in my code, and then, as I understand the problem more, I like to add in more contract-enforcement. This is what I attempted to communicate in my essay "How ignorant am I, and how do I formally specify that in my code?" [1]
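To illustrate that workflow concretely (Python here, purely because its optional type hints make the progression easy to show in a few lines; the function names are made up):

```python
# Stage 1: exploring an unfamiliar problem -- no types, no contracts.
def parse_price(raw):
    return float(raw.strip().lstrip("$"))

# Stage 2: the problem is better understood -- the same logic, now with
# type hints for a checker and an explicit runtime contract.
def parse_price_checked(raw: str) -> float:
    price = float(raw.strip().lstrip("$"))
    if price < 0:
        raise ValueError("price must be non-negative")
    return price
```

Nothing forces the move from stage 1 to stage 2; each piece of code tightens only once its corner of the problem is understood, which is the appeal of gradual typing.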
I think everyone who works with Clojure sometimes misses strict type-checking. Because of this, there have been several efforts to offer interesting hybrid approaches that attempt to offer the best of all worlds. There is Typed Clojure for those who want gradual typing, and there is more recently Spec. Given what I've written, you might think I am a huge fan of Typed Clojure, but I've actually never used it for anything serious. The annotations are a little bit heavy. I might use it in the future, but for now, I am most excited about Spec, which I think introduces some new ideas that are both exciting for Clojure, and which I think will eventually influence other languages as well.
Do watch the video "Agility & Robustness: Clojure spec" by Stuart Halloway. [2]
I also sort of understand what they mean when they write this:
No map, no flatMap, no fold, no generics, no inheritance… Do we miss them?
There are times when we all crave simple code. Many times I have had to re-write someone else's code, and this can be a very painful experience. There are many ways that other programmers (everyone who is not us, and who doesn't do things exactly like we do) can go wrong, from style issues such as bad variable names to deeper coding issues such as overuse of Patterns or using complex algorithms when a simple one would do. I get that.
All the same, I want to be productive. And to be productive in 2017 means relying on other people's code -- and, in particular, being able to reliably rely on other people's code: using it should not be a painful experience. Therefore, for me, in 2017, one of the most important issues in programming is composability. How easy is it for me to compose your code with my code? That is a complex issue, but in general, languages that allow for high levels of metaprogramming allow for high levels of composability. Ruby, JavaScript, and Clojure all do well in this regard, though Ruby and JavaScript both have some gotchas that I'd rather avoid. In all 3 languages, I find myself relying on lots of 3rd-party libraries. I use mountains of other people's code. Most of the time, this is fairly painless, but there are occasionally painful situations. With Ruby I run the risk that someone's monkeypatching will sabotage my work in ways so mysterious that it can take me a week to find the problem. And JavaScript sometimes has the same problem when 3rd parties add things to a prototype, perhaps using a name that I am also using. So far I have had an almost miraculous time using Clojure libraries without facing any problems from them. It's this issue of composability that makes me wary of Go. While I sometimes crave a language that simple, I can't bring myself to give up so many of a modern language's best features.
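The monkeypatching hazard described above can be shown in a few lines of Python (the names are invented for illustration):

```python
class Report:
    def render(self):
        return "plain"

# Somewhere deep inside a 3rd-party library, someone "improves" a
# class they do not own...
def fancy_render(self):
    return "fancy"

Report.render = fancy_render

# ...and every other consumer of Report silently changes behavior,
# far from the code that caused it.
print(Report().render())  # prints "fancy", not the "plain" the author wrote
```

The failure is global and action-at-a-distance, which is why it can take days to trace back to the patch.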
> If I am facing a problem that I have never faced before, I like to start off without any types in my code, and then, as I understand the problem more, I like to add in more contract-enforcement.
This seems to be a widespread sentiment.
In practice, I've found that prototyping anything remotely complex without types is so painful that I'd rather settle for an inferior design than try to come up with the best possible abstraction.
In Haskell, I come up with a coherent skeleton without having to implement mundane details, hit a wall in the design space because of some case I didn't think of, come up with a better idea and then go back to the code and refactor with confidence. The type system always guarantees that my prototype is coherent as a whole. And I can do that dozens of times.
In Clojure, even with spec, I'd have to implement all my functions fully before being even able to test the design as a whole (sure, testing individual functions works fine in the REPL.) And after hitting a wall, having to reimplement everything every time is just too much work.
> In practice, I've found that prototyping anything remotely complex without types is so painful
Just to offer a counter-point, I have a Clojure project here with 3k LOC, without using spec/schema. All I have is 700 LOC of tests. The tests enforce semantic meaning, along with (some) contracts. I miss types from time to time, but it is nowhere near as bad as you mention. Against me is the fact that my app is mostly self-contained, and written all by me. I am fairly certain the project would be at least 30k LOC if I wrote it in Java.
I think the value of types only comes into being when there are many people working on a single code base. The REPL, tests, and integration tests will take you a long way before they reach their limits.
I don't buy the argument that types are useful for large codebases, because
1. Types don't enforce meaning. Meaning is far more complex than types. Haskell-style types only work as far as they enforce meaning.
2. Types have limits too. More accurately, humans have limits. Working with large codebases where there are 1000s of types is hard; which it shouldn't be, because that was the type system's selling point all along.
Cleaner abstractions of code, separated by strongly enforced protocols (types), is the way to go, I think.
Have you tried what I'm talking about, i.e. prototyping something by specifying the types and iterating on the core design first before implementing large chunks of the required functionality?
It's probably hard to see the benefits without having tried it first.
This [0] is one of the bigger Clojure projects I've done (around 2.5kloc), also without spec/schema, and I really didn't dare refactor much, even when having a clear understanding of how things could be done better. It was just too much work.
With types I'd go through 10 design iterations before settling for something I'm satisfied with, and even halfway through the project changing things radically isn't a problem (I've worked on a 50+ kloc Haskell backend service, and changing core data structures used pervasively throughout the codebase was a 10-minute job, literally.)
I see what you are trying to say. You are saying Clojure is not the right tool to do top-down design. I agree with it. It is however a very good tool to do bottom up design.
See https://www.youtube.com/watch?v=Tb823aqgX_0
Today I caught a bug in a macro-expanding code walker, where it was expanding the wrong form. The syntax being walked is (foo-special-operator x y z . rest). x and z are ordinary forms that need to be expanded; y is a destructuring pattern (irrelevant here). The walker was expanding z in the place of x: that is to say, expanding z twice, and using that as the expansion of both x and of z. That's simply due to a typo referring to the wrong variable.
The type system would be of donkey all use here, because everything has the correct type, with or without the mistake. The code referred to the wrong thing of exactly the right type.
A lot of code works with numerous variables or object elements that are all of the same type, and can be mixed up without a diagnostic.
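A minimal Python sketch of that class of bug (hypothetical names; the point carries over to any typed language):

```python
def macroexpand(form: str) -> str:
    # Stand-in for real macro expansion.
    return f"expanded({form})"

def walk(x: str, z: str) -> tuple[str, str]:
    # Typo: z is expanded in place of x. Both variables are plain
    # strings, so the annotations are satisfied and a type checker
    # has nothing to object to.
    return macroexpand(z), macroexpand(z)  # should be macroexpand(x), macroexpand(z)
```

The program is well-typed before and after the fix; only a test (or a reader) can tell the two apart.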
Types don't enforce all meaning; i.e., the enforcement of contracts through types goes only as far as the types mean something in the problem you are applying them to. It does not cover all the complexities of the problem, or the way the code will be changed in the future.
EDIT: This is also why it is easy (and nice) to implement parsers in strictly typed functional languages, because parsers are well studied theoretically. The problems in the real world are not studied well enough for contracts enforced via types to work completely.
Types don't enforce meaning; types are a tool I use to enforce consistency of meaning along certain important dimensions. I actually find that even more important when the situation is messy, because I'm likely to mischaracterize some aspects of it initially, and when I go to change things it's very useful to be told what's now inconsistent.
I get what you are saying. What I'm trying to say is that typing takes too much from me, in terms of complexity overhead, that I'm better off without. I have found this to be true in practice. As I said before, I write tests to do what you say types do -- for me, that is enforcing meaning. Types do allow for easy refactoring, and I think that is a weakness of untyped languages.
> The above function enforces that getting a user can fail and you must contact the outside world to get a user.
That seems sort of backwards. It enforces that the caller be able to handle failure (and similar for IO). It may well be that "getting a user" doesn't do either (e.g. `pure (pure defaultUser)`)
> In Haskell, I come up with a coherent skeleton without having to implement mundane details, hit a wall in the design space because of some case I didn't think of, come up with a better idea and then go back to the code and refactor with confidence. The type system always guarantees that my prototype is coherent as a whole. And I can do that dozens of times.
Could you go into a bit of detail about this approach? Do you mock functions out and just specify type definitions, and fill in functionality as you go?
Note that src/Hotep.hs exports a bunch of undefined functions. I've been able to verify that these types all make sense even without implementing a thing. As time goes on I may learn that the implementation drives the types somewhere else and then the compiler will make refactoring easy.
However, already I've gone through about 5 iterations of this design which drove me to debug some structural questions about the whole affair and also dive deep into Erlang documentation to determine how they solved problems. These explorations and their results are encoded into the types.
At this point I'm beginning to consider implementation and I can keep filling out partial implementations against these types. I'll probably make 2 or 3 toy implementations to test out the ideas again with more strength before moving on to the final ones. The whole time the types will be guiding that development and helping it move quickly.
Key to this whole affair is the need to describe types utterly before a completely successful library is made... and also the ability to defer the burden of providing type-checking code for as long as desirable. Haskell supports this wonderfully—even more wonderfully with things like Typed Holes and Deferred Type Errors which enable a really great interactive experience I haven't yet needed to employ.
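A very rough Python analogue of this "types first, bodies later" style (Haskell's `undefined` corresponds loosely to a stub that raises; a checker like mypy can validate the whole skeleton while every body is still a placeholder -- the names here are invented):

```python
from typing import NewType

UserId = NewType("UserId", int)

class User:
    def __init__(self, uid: UserId, name: str) -> None:
        self.uid, self.name = uid, name

# The whole design can be sketched and type-checked before anything
# is implemented; each body is just a placeholder, like `undefined`.
def fetch_user(uid: UserId) -> User:
    raise NotImplementedError

def rename(user: User, name: str) -> User:
    raise NotImplementedError

def save(user: User) -> None:
    raise NotImplementedError
```

If a later design iteration changes `User`, every signature that no longer fits is flagged at once, which is the cheap-refactoring property being described.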
I've used Scala, Clojure, and Go. I found Scala to be too feature-rich for its own good.
It was fun to write (Look at me! I just spent two hours figuring out how to compress this old Java 7 function into a one-liner in Scala!), but reading someone else's Scala was almost as mind-numbing as reading another programmer's C++.
Go is... meh. Quick to learn, easy to write (and read), but you quickly hit a plateau as far as personal productivity goes. I can see the value when working on large teams, but on my personal projects (for which I have limited time) my own productivity is paramount (not to mention I want a language that's fun to use) :)
Thus I've found Clojure is my ideal language for the time being. It strikes a good balance between power and simplicity (and at this point I find it more readable even than Go).
The way you describe your experience with Scala makes me think you only had a very superficial look at it.
At its core Scala is very simple & the syntax is very regular, far more so than Go or Java and a lot less complex than C++. It's the most expressive typed language on the JVM, so if you like to think in types & you're on the JVM, it's your best option.
Clojure is untyped; I hear many people praising it, but I don't know of any big project done in Clojure. So if you're doing short-lived projects I'm sure it can shine, but for software that will be around for more than 5 years I would stay away from it. Btw, if misused, just like Scala, Clojure code can be extremely cryptic.
Go likes its superficial simplicity & syntactic irregularity, and stubbornly refuses to accept that PL design has evolved since the '80s and '90s, but I'm sure it's appealing to people who are used to languages from that era.
Go is a language that is a bit tedious to write, no doubt about that, but it's very easy to read. I spend a lot of my time reading other people's code and I really appreciate that.
The fact that Scala as a language allows something like SBT to not only be created, but accepted, means I don't want anything to do with it.
I've suffered long from the Ruby ecosystem's "look at what I can do!" mentality of self-serving, pointless DSLs and frameworks, and solemnly swore to myself to stay away from cute languages that encourage bored devs to get "creative".
It's about trade-offs, I guess. Go definitely appeals to a lot of people, and not all of us are unaware of the amazing "progress" that has been made in the 80's and 90's. Awesome progress that brought us Java, SOAP, C++, Javascript-on-the-server, and a slew of other tech some of us want to stay far, far away from.
> Go is a language that is a bit tedious to write, no doubt about that, but it's very easy to read. I spend a lot of my time reading other people's code and I really appreciate that.
Figuring out 1000 lines of code that could have been 10, with verbosity caused by a lack of generics, is not going to help you understand code quicker. Figuring out what 10 lines of Scala do may take more time than 10 lines of Go, but that's not a measure of velocity; the information density of Go is just too low. At least 10 lines of Scala fit on my screen; 1000 lines of Go don't.
Code style issues imho are a team issue, if you do reviews these issues can be managed.
> not all of us are unaware of the amazing "progress"
Look at Rust, at least they did their homework. With Rust out there I can't see any reason to use Go except maybe their crappy GC.
I like Rust too. I re-wrote a parser I had implemented in Go in Rust and got almost an 8x speed-up.
Having said that, the two almost have no overlap for me. I don't see how Rust replaces Go in 9/10 of Go use-cases. And vice-versa.
> Awesome progress that brought us Java, SOAP, C++, Javascript-on-the-server, and a slew of other tech some of us want to stay far, far away from.
When people talk about progress from PL design, they aren't talking about any of those things. If you notice, all of those things were made in industry, not in PL research/academia. Not to mention that those languages are also ones that ignored the PL design progress! (although C++ finally seems to be adding some ideas from PL research in C++17)
They're talking about things like parametric polymorphism, dependent types, modules, macros etc. (I've mostly been reading about work in the types/ML family languages, but I'm sure there's progress been made outside that as well)
> The way you describe your experience with Scala makes me think you only had a very superficial look at it.
I worked with it daily for 2 years and also took Odersky's Coursera courses on Scala. I liked it more than Java (7), though I find the syntax aesthetically offensive (and I realize that's subjective). Ada (I worked in an Ada shop for 3 years before moving to the JVM) managed to have a robust type system without introducing the sort of syntax wtf-ness that Scala seems to need.
I actually recommended against using Scala at a later job simply because I thought the learning curve would be beyond most of the people I was working with (I didn't phrase it quite like that when mgmt asked me for my opinion, of course). Learning to write idiomatic Scala takes time. In that regard it's an expensive language to use unless you hire folks who already have experience with it.
As far as Clojure's dynamic typing goes, you can use libraries like plumatic/schema to add some checking where you need it (interfaces, etc), and for whatever reason I tend to have less trouble (as far as runtime type issues go) with Clojure than I do with Python or Ruby.
But hey, no language is perfect. I kind of wish Haskell clicked for me the way LISP seems to, since on paper it seems to check all the boxes -- but I just can't seem to get very proficient with it (or I'm just not willing to invest the time at this point).
> though I find the syntax aesthetically offensive
Well, that's indeed your opinion; whenever I write code in a language with old-fashioned statements I die a little inside. I have a lot of experience with Ada too. At its core it's still a procedural language prohibiting good abstractions; it's very safe but also extremely verbose.
I learned Scala after I learned Haskell, ~7 years ago; maybe that's why I had a different experience. Since I learned Haskell I think in types & transformations, and it has made me a much better programmer. Clojure has sort of the same mindset, but I would call it 'shapes & transformations'.
> Clojure is untyped, I hear many people praising it but I don't know any big project done in Clojure.
Not sure your standard of big, but:
1. Most of Climate Corporation's (https://climate.com/) backend is written in Clojure, and they deal with truly massive amounts of imaging data and parse/munge it to be useful for their applications.
>Go likes it's superficial simplicity, syntactic irregularity & stubbornly refuses to accept that PL design has evolved since the 80-90ies, but I'm sure it's appealing to people who are used to languages from that era.
Not really. The largest proportion of Golang users come from relatively modern interpreted languages like Python and Ruby, which are widely used in server environments. I'm unable to find the survey results, but I do recall this trend surprised Go's original authors, who were originally seeking to replace C++ & Java.
I guess Scala being "the most expressive typed language on the JVM" is true in the sense that it has a ton of features (OOP, FP, Exceptions, null Java backwards compatibility, etc.), but that's just too many features for a coherent language.
It's funny that both Java & C# seem to be picking up many Scala features in their latest & future versions: traits, lambdas, tuples, pattern matching, case classes, closed hierarchies, declarative generics...
Your language isn't incoherent if you can build features on top of each other, which is exactly what Scala does.
When I read comments like this I wonder if the person posting is trying to justify their own development decisions or seriously trying to "enlighten" the person they're responding to.
FWIW Python is also a great choice for composability. gevent is the only widely used library that really monkey-patches, and with it (to answer the question in OP) you can write essentially any function as a pseudo-goroutine (greenlets that yield to the event loop when reading from a queue which allows you to apply backpressure). And you can still access the vast realm of Python libraries; any that use sockets will automatically yield to the event loop on blocking operations. The criticisms from the recent Python complaint thread are valid, but it's still a great language for using other code.
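gevent itself isn't needed to see the backpressure mechanism; a stdlib sketch with a thread and a bounded queue shows the same idea (a full `put()` blocks the producer until the consumer catches up, which is what greenlets reading from a bounded queue give you cooperatively):

```python
import queue
import threading

tasks = queue.Queue(maxsize=2)  # the bound is the backpressure
results = []

def consumer():
    while True:
        item = tasks.get()
        if item is None:  # sentinel: shut down
            break
        results.append(item * 2)

t = threading.Thread(target=consumer)
t.start()
for i in range(10):
    tasks.put(i)  # blocks whenever the queue is full
tasks.put(None)
t.join()
print(results)  # doubled items, in order
```

With gevent the same shape runs on greenlets instead of OS threads, and the blocking `put` becomes a yield to the event loop.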
For JS, rarely do people mutate Object.prototype these days; everything's a functional library. So almost all libraries you use will be good actors. It's also a good choice, though for data management and interop with scientific/vector/tensor operations, it's still hard to beat Python.
Interesting to hear. I like the idea of gradual typing, as I'm coming from a predominantly Python background and sometimes want to add types as I go. Perl 6 looks really promising here. Curtis "Ovid" Poe has a good YouTube video on this. He starts with the Fibonacci function, which can easily go wrong depending on a variety of inputs, and keeps adding type restrictions, such as that it has to be a positive int between a range of numbers... I dunno, something along those lines.
As it turned out, more flexibility led to devs writing code that others actually struggled to understand.
This is what happens in almost every language. Niftyness and the prospect of impressing your coworkers distorts the cost-benefit calculation. This is in addition to the true costs appearing months or years after the code is written, involving the interaction of complex factors, like increased cost of debugging.
"Clever" should be regarded as a limited resource. Also, the shop should encourage a culture where "clever" with regards to making code easier to understand should be valued above all else.
This (lengthy) quote comes to mind. If you think it is interesting, please read the whole EWD.
EWD 340 (Prof. Edsger Wybe Dijkstra) [1]:
"The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague. In the case of a well-known conversational programming language I have been told from various sides that as soon as a programming community is equipped with a terminal for it, a specific phenomenon occurs that even has a well-established name: it is called “the one-liners”. It takes one of two different forms: one programmer places a one-line program on the desk of another and either he proudly tells what it does and adds the question “Can you code this in less symbols?” —as if this were of any conceptual relevance!— or he just asks “Guess what it does!”. From this observation we must conclude that this language as a tool is an open invitation for clever tricks; and while exactly this may be the explanation for some of its appeal, viz. to those who like to show how clever they are, I am sorry, but I must regard this as one of the most damning things that can be said about a programming language. Another lesson we should have learned from the recent past is that the development of “richer” or “more powerful” programming languages was a mistake in the sense that these baroque monstrosities, these conglomerations of idiosyncrasies, are really unmanageable, both mechanically and mentally. "
I think it is interesting that you left out the very next few sentences, which provide very relevant context:
"I see a great future for very systematic and very modest programming languages. When I say “modest”, I mean that, for instance, not only ALGOL 60’s “for clause”, but even FORTRAN’s “DO loop” may find themselves thrown out as being too baroque."
While I agree with the general sentiment, it's very important to take anything Dijkstra says with a huge grain of salt. He was a mathematician first and foremost. He obsessed about things like a mathematician. Such people, while very smart, make for very unproductive software engineers. They also have a way of sounding deceptively smart and thoughtful, when really they're just talking out of their ass. Beware.
I agree with Dijkstra: the looping constructs of FORTRAN's DO and ALGOL 60's for clause were too baroque. Dijkstra's comments were written in 1972, and at that time FORTRAN IV's and ALGOL 60's loop semantics were a mess and were common sources of errors.
Without qualification, Dijkstra was one of the greatest computer scientists in our field's short history. While I don't agree with every single one of his ideas, I would encourage budding computer scientists and professional programmers to look over what he accomplished. He most certainly was not unproductive.
- T.H.E. multiprogramming system (one of the first operating systems)
- Concept of levels of abstraction
- Concept of layered structure in software architecture (layered architecture)
- Concept of cooperating sequential processes
- Concept of program families
- Multithreaded programming
- Concurrent programming
- Concurrent algorithms
- Principles of distributed computing
- Distributed algorithms
- Synchronization primitive
- Mutual exclusion
- Critical section
- Generalization of Dekker's algorithm
- Tri-color marking algorithm
- Call stack
- Fault-tolerant systems
- Self-stabilizing distributed systems
- Resource starvation
- Deadly embrace
- Deadlock prevention algorithms
- Shunting-yard algorithm
- Banker's algorithm
- Dining philosophers problem
- Sleeping barber problem
- Producer–consumer problem (bounded buffer problem)
- Dutch national flag problem
- Predicate transformer semantics
- Guarded Command Language
- Weakest precondition calculus
- Unbounded nondeterminism
- Dijkstra-Scholten algorithm
- Smoothsort
- Separation of concerns
- Program verification
- Program derivation
- Software crisis
- Software architecture
This quote by galactipony is really offensive to me:
> Such people, while very smart, make for very unproductive software engineers. They also have a way of sounding deceptively smart and thoughtful, when really they're just talking out of their ass. Beware.
It is quite clear to me that Dijkstra wasn't "just talking out of [his] ass."
I didn't say Dijkstra was talking out of his ass in this case. I'm saying you often can't tell when people like him are talking out of their ass, as they sometimes do. Who would think the inventor of all these algorithms would ever utter a half-formed thought, on a whim? Unthinkable! Yeah... no.
Another distinction you fail to see is that somebody who is great at finding the best or optimal algorithms (i.e. a great theorist) isn't necessarily a productive programmer. To the contrary, perfectionism and productivity are at great odds.
Dijkstra was heavily at odds with real-world programming as it was done, to the point of isolating himself with his work. If we followed his opinions on how to program, we wouldn't get much of anything done.
> Another distinction you fail to see is that somebody who is great at finding the best or optimal algorithms (i.e. a great theorist) isn't necessarily a productive programmer.
I suppose what you're saying here is that there are a lot of business tasks which don't require a great theorist.
I don't think Dijkstra would disagree.
But still, calling Dijkstra an "unproductive programmer"? Really? I'll take one Dijkstra's algorithm over a dozen web apps. And if I can have a patent on it, I can even make a solid business case for that choice.
> If we followed his opinions on how to program, we wouldn't get much of anything done.
And yet, how many man-years of engineering effort could we waste having bad theorists who can quickly hack out LoB code try to re-invent Dijkstra's algorithm?
Perhaps software engineering is a very wide field, and it takes all types?
> To the contrary, perfectionism and productivity are at great odds.
I've learned this the hard way in my career. I always have to fight against taking the extra 20 hours to perfect something when it only took me 1 hour to get to 95% and 95% is more than good enough for the particular task.
I agree with todd8, but would like to add that the next paragraphs were not added for two simple reasons: the quote itself was already too long and I felt it summarizes well enough the gist of the argument: programming languages should not allow 'clever' tricks.
But, let's discuss the 'for clause' and 'DO loop': these constructs were made specifically for one kind of simple loop. It is not a systematic solution for an iterative process. To me it seems Dijkstra specifically aims for languages such as LISP (which, with the renewed interest from Clojure, is one of the oldest successful (semi-)functional programming languages).
"I agree with todd8, but would like to add that the next paragraphs were not added for two simple reasons: the quote itself was already too long and I felt it summarizes well enough the gist of the argument: programming languages should not allow 'clever' tricks."
It still seems intellectually dishonest to leave it out. Clearly, what Dijkstra in 1972 considers too "clever" may in fact be tools that are now basic building blocks of everyday software. We can all agree that "too clever" is bad. We can't agree on what "too clever" is.
"But, lets discuss the 'for clause' and 'do loop': these constructs were made specifically for one kind of simple loop. It is not a systematic solution for an iterative process."
I'm fairly sure (from what I remember him writing) that he doesn't like it because you can do the same thing with existing constructs, so you'd be adding complexity to the language that isn't strictly necessary. History has shown that actual programmers prefer having for loops like Algol.
"To me it seems Dijkstra specifically aims for languages such as LISP (which, with the renewed interest from Clojure is one of the most oldest successful (semi-) functional programming languages)."
Probably not, or else he would've talked about LISP in a different manner (he does talk about it in an earlier paragraph).
Yes, programmers prefer it, but the general structure:
`for (init-statement ; boolean-continuation-expression ; iteration-statement) statement;` is syntactic sugar for a specific imperative process (with the iteration and ending expression appended at the end of the block). It does not generalize to other imperative processes, it cannot be transformed into a meaningful expression and it invites 'clever' programmers to do 'too much' in the various for-clauses.
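Written out, that desugaring looks like the following (Python's while is used here just to make the shape explicit):

```python
# for (i = 0; i < n; i++) { total += i; }  is sugar for:
n, total = 5, 0
i = 0               # init-statement
while i < n:        # boolean-continuation-expression
    total += i      # statement (the loop body)
    i += 1          # iteration-statement, appended to the block
```

The construct covers exactly this one imperative shape; anything else has to fall back to the underlying while form.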
Wrt LISP, do you mean this part? That seems to align well with my standpoint: use very few basic principles and be stable.
"The third project I would not like to leave unmentioned is LISP, a fascinating enterprise of a completely different nature. With a few very basic principles at its foundation, it has shown a remarkable stability. Besides that, LISP has been the carrier for a considerable number of in a sense our most sophisticated computer applications. LISP has jokingly been described as “the most intelligent way to misuse a computer”. I think that description a great compliment because it transmits the full flavour of liberation: it has assisted a number of our most gifted fellow humans in thinking previously impossible thoughts."
> We can all agree that "too clever" is bad. We can't agree on what "too clever" is.
Of course we agree. Too clever is "stuff I'm too lazy to understand". What we don't agree is on the definition of I; everyone has their own binding for that symbol, which carries a context for different types of stuff we are too lazy to understand.
Interesting take. 'Too clever' for me is: 'using underlying concepts which have semantics that have a poor mental overhead versus applicability within the domain'.
Correct me if I'm wrong, but wasn't Dijkstra known for never actually using a computer? He wrote code with pencil and paper. There's a gulf between academic code and the needs of the day-to-day programmer. That said, I agree with his statement about one-liners. Code should be parsimonious. It should never be a puzzle to understand what the code is doing.
Dijkstra decided to program using pencil and paper much later, and mostly because he (like most good programmers) already knew the solution before writing it down.
I correct you again for implying he wrote purely academic code. This is plainly false: for example, he helped specify the ALGOL 60 language and implemented one of its first compilers. He also implemented one of the first multi-layer (ring-based) OSes.
And then you mention that the day-to-day programmer is not in need of academic code. To the contrary! Dijkstra argued that the software crisis (the gap between what computers can do and what they are actually doing) exists because day-to-day programmers do not have the right tools and knowledge at hand. In EWD 340, Dijkstra argues this is caused by the mental overhead of clever tricks, and of languages that allow those tricks.
As a professional, I am constantly relating to concepts and code which have an academic basis. Examples are from type theory, lambda calculus, paxos, map-reduce, queueing theory, compression algorithms and many more. I have seen many programming languages, wrote assemblers, interpreters and compilers, worked professionally with imperative, OOP, functional and (higher-order) logical languages, so I dare to say that programming is not about what you can do with the language, but what the language does with you.
While this may have a lot of truth to it, the ironic thing here is you are posting this in response to a Dijkstra quote urging us down the path of humility.
Tricks are fun: little puzzles. But they don't belong in code unless they both (1) significantly (relative to the task) save time and/or resources and (2) are very well explained in comments, e.g. with the full non-clever version left in the code.
I know I'm preaching to the choir here. I know we've all come back to our own code six months later and had to puzzle it out.
More on cleverness from an old AI textbook (Artificial Intelligence Programming by Charniak, Riesbeck and McDermott):
"1.12.5 Cleverness
Avoid it. Clever tricks ... [Lisp-specific stuff omitted.]
To paraphrase Samuel Johnson, we advise you to look over your code very carefully, and whenever you find a part that you think is particularly fine, strike it out."
Agreed, but they also complained about map() and flatMap(). I have to think that eventually every developer can understand the more straightforward monads, functors, and Either - which, along with type aliases, IMO, can make code a lot more readable.
I think the more esoteric features should be reserved for complex code, especially if it's possible that those features can prove runtime correctness at compile time and result in easily-composable modules. I wouldn't expect more than one or two developers to need to develop such a subsystem, however, so the "clever" is used in isolation.
Good for them! Their problem domain is likely so simple and neat that it fits Go's limited built-in types well and does not lead to frequent copy-paste programming. If so, Scala would have been overkill.
Lucky bunch indeed. I'm 9 months in and still missing map(), flatMap(), let alone more advanced FP.
I guess this is a matter of personal preference, but I definitely consider:
users.filter(_.active).map(_.email)
to be easier to read (and not a pain in the ass to type) than:
emails := make([]string, 0)
for _, u := range users {
    if u.active {
        emails = append(emails, u.email)
    }
}
return email
The first style is more explicit about what you want to do, the second style is more explicit about how you want to do it. Go made imperative vogue again, but to me it feels like a throwback all the way to Turbo Pascal. Sure, the Go compiler is really fast, but Turbo Pascal would beat it hands down - on 20 MHz machines with 1 MB of RAM.
Completely agree. There is a certain beauty to the first version; once you've learned the concepts, it seems a shame to have to write the second one.
The second example is vastly more readable in terms of understanding what it is supposed to do.
I mention this as someone who programs in neither Scala nor Go.
If it is of any relevance, I've been writing code for about 20 years across C, Java, Javascript, PHP and shell scripts. I'm sure other FP developers will roll their eyes to a traditional developer like me but the one-liner you mention is simply not self-explanatory for an outsider when reading it.
"I've been speaking Italian, French and Spanish for 20 years. I'm sure people who speak Germanic languages will roll their eyes at a traditional Latin-languages guy like me, but the German sentence you mention is simply not self-explanatory for an outsider when reading it."
It returns 'email' instead of 'emails', which I assume was a typo, and which of course the compiler would complain about. Are you claiming that a variable name typo is somehow a 'subtle bug'?
Only other thing I see directly is that 'users' is undefined, but this is a code snippet after all, not a working program.
I think kaoD's point is that more code will always produce more errors, and more code (in the name of "readability") can give more surface area for bugs.
I've been programming since the mid-80s; many of the features in modern languages weren't always available outside research institute walls, and yet we managed to deliver all sorts of software with the tools we had at our disposal.
However, now that those features are part of the majority of mainstream languages, I surely don't want to go back to how I used to write software in the first two decades of my career.
Developers have been using C and a bunch of other imperative languages for years. They obviously created software two orders of magnitude more complex than CockroachDB (or Docker, or Kubernetes, or etcd). We used to write entire operating systems in assembly, heaven forbid. Just look at the PostgreSQL codebase (all C) and compare that to CockroachDB.
Being "a better C" was the original battle cry of Go. It's definitely better than C (well, at least for some things), but that doesn't mean all the complex C software people used to write is toys.
Go is obviously not a useless toy language either. But that doesn't mean you won't miss features from more expressive languages. The number of calls for generics from Go users tells a different story.
Some of it, probably. But some are probably using Go out of necessity (your company enforces Go), or because they still think Go is their best pragmatic option, lack of generics notwithstanding.
The guys who wrote CockroachDB never, afaik, tried to write it in Scala, and never told us whether they miss flatMap, generics, or something like that.
The guys from Movio did; their problem domain is likely different.
A for-each loop is only slightly longer, more familiar (just about every language has it), and more flexible - it covers map, flatMap, and reduce. So why have three or more specialized constructs when one will do?
There are cases when the restrictions of map() and reduce() are necessary, most famously for a map-reduce in distributed programming, but that's not really relevant for implementing a simple function.
Map, fold, reduce and friends are statements of intent, and reading a line of transformations composed on multiple maps and such is generally much clearer than reading a series of loops in which you have to figure out intent by looking and what's going on inside each loop.
Also, error density is relatively constant per line of code according to most studies (this is a very general statement, but we can assume that most code just isn't trying to be clever). I've forgotten to add(), append() and drop items inside a loop far more than I've made mistakes inside more functional methods.
I would argue that for loops are imperative, map/fold/etc are more declarative. That leads to various benefits such as less code, fewer bugs, fewer off-by-one errors.
And, man, tail recursion in a language with function head pattern matching (ML family, Erlang) is so much easier to read than any complicated for loop.
(Update: realized afterwards that "foreach" isn't quite the same as "for", but the larger point still stands, mostly.)
And a goto statement is even more general, covering all kinds of looping constructs, exceptions, and even eliminating the need for functions! Why not just use those?
Technically, reduce covers map, flatmap, filter, etc. It's every bit as flexible as a for-each loop (the only difference is that it implies no side effects, though it cannot guarantee their absence if the language doesn't).
The very reason those other functional constructs were split out was not need but -clarity-. And that same clarity is the reason to use one of those constructs instead of a for-each loop.
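To back the "reduce covers the rest" claim, here is a quick Python sketch (illustrative only; the repeated list concatenation is quadratic, which is one more reason the specialized forms exist):

```python
from functools import reduce

xs = [1, 2, 3, 4]

# map expressed as a reduce: fold each transformed element onto an accumulator
mapped = reduce(lambda acc, x: acc + [x * 2], xs, [])

# filter expressed as a reduce: only fold in elements passing the predicate
filtered = reduce(lambda acc, x: acc + [x] if x % 2 == 0 else acc, xs, [])

print(mapped)    # [2, 4, 6, 8]
print(filtered)  # [2, 4]
```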
I'm definitely in camp map/filter/fold, but if you instantiate a list/accumulator, fill/operate on it inside the loop, then consume it, you're essentially writing pure code. The fact that you're doing it on top of mutable foundations doesn't matter (at that scale).
The trouble is that safe mutation and dangerous mutation look very similar. It's possible to write a function that performs safe, locally encapsulated mutation, sure - but it's much harder for a reader to confirm those properties in code review or when debugging compared to just seeing that the code doesn't do any mutation at all.
There just isn't that much to get wrong in pure for-each loops:
def filter_broken(widgets):
    broken_widgets = []
    for widget in widgets:
        if widget.is_broken():
            broken_widgets.append(widget)
    return broken_widgets
The actual errors with loops and mutable objects come from complex nested loops, continues/breaks (multi-level ones especially!), extra boolean conditions, and sharing mutable objects between parts of the code base.
I think that overstating the supposed risks of for-loops doesn't help anything. The problem is not that you're going to mess up a simple for loop. It's the ways that your code doesn't compose well that are more problematic. Oh, and the verbosity sucks too.
Hm. Wouldn't your code reverse the order of the list?
Maybe that's right, maybe that's wrong, but it's indeterminate from the code whether it's desired, whereas a filter function and a reverse function would make it explicit.
Append is a function that adds it to the end of the list. (I think my code is working Python, but I haven't actually run it).
Btw: I'd venture to guess that you're not a Python person, and that's why it's not obvious that it constructs the list in the same order. In imperative languages (where Python isn't the perfect example, because it has filter and list comprehensions), you get so used to these loops that you don't have to worry about questions like that.
If you wanted a reversed list, you'd either name your function appropriately, or call reverse after you run the filter method. It's a little painful to do the first bit.
> So why have three or more specialized constructs when one will do?
Because having three specialized functions for three specialized operations makes it easier to see what a given piece of code is doing. One function per function.
Didn't exist at the time. While I still prefer scala to java 8, had the latter been available at the time I likely never would have dabbled in scala in the first place, if that makes sense
I dabble in all sorts of programming languages, but when it comes to work, it is mostly C#, Java, JavaScript and some occasional C++, because that is what customers pay for.
Hence I see big value in mainstream languages slowly evolving into multi-paradigm ones, as many of us don't have the luxury to move beyond the first-class languages of each platform.
Side effects? Testing? Decoupling? (thus reuse)
If your brain is organized enough to write clean for loops then ... maybe. But it's a big opportunity for problems. At least to me; but then I may not be smart enough.
I think for tasks such as JSON parsing, map and flatmap like functions are really helpful. You write less code and the code is way easier to understand (for those who are familiar with map functions).
That being said, map and flatmap force immutability in some way. So you pay a price for this, either in speed or in memory, even with tail recursion.
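For instance (in Python, with an invented payload and key names), a flatMap-style extraction over nested JSON collapses what would otherwise be two nested loops and a mutable accumulator:

```python
# Hypothetical decoded-JSON payload; structure and key names are made up.
payload = {
    "teams": [
        {"name": "core", "members": [{"email": "a@example.com"},
                                     {"email": "b@example.com"}]},
        {"name": "infra", "members": [{"email": "c@example.com"}]},
    ]
}

# flatMap in comprehension form: one expression, no mutable accumulator
emails = [m["email"]
          for team in payload["teams"]
          for m in team["members"]]

print(emails)  # ['a@example.com', 'b@example.com', 'c@example.com']
```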
I'd much rather deal with concise, clever code that has a sane interface and works, rather than sprawling long winded code where everything is void and the same low level constructs are used everywhere.
And honestly, sometimes I don't know if code is too "clever" or the reader is just too "dumb".
>And honestly, sometimes I don't know if code is too "clever" or the reader is just too "dumb".
IMO, the answer is that people vastly underestimate the cognitive cost of reading code. "Code" rarely exists in a vacuum; the reader has a universe of inputs, outputs, problems, solutions, failures and goals. The cognitive cost of trying to read someone's code is something that must be paid multiple times per day, and at some point you have to wonder: is the gain in expressiveness for a single programmer worth the cognitive cost to the multiple programmers who have to parse that code?
I'd imagine the answer is no - especially for companies that spend more time iterating and patching based on customer feedback than actually designing and architecting systems. My belief is that expressive languages have their place but companies are far more likely to build their software in a "patch-test-iterate" environment which favors less expressive languages.
I'll keep repeating myself: the cognitive cost of "code" is dwarfed by the cost of understanding an application. If a couple of esoteric language features take an extra few hours to learn but reduce LOC tenfold, it's a price I'd pay every time.
Not everyone feels this way, the investment to learn these features may not pay off right away, and if you come from a "move fast and break things" language (python/php/js) it can feel like a waste of time.
> If a couple of esoteric language features take an extra few hours to learn but reduce LOC tenfold, it's a price I'd pay every time.
The issue is that a beginner, pre-conditioned long enough on "verboser, imperativer" models/languages, literally has to mentally expand every 1 line into 10 when reading such a codebase, for quite a while (some weeks, some months) as a more intuitive grasp sets in over time. That's probably what happened at the OP's quoted company, with the different speeds at which their coders comprehended the "clever" (compact) code some of them came up with, which they flippantly called "code that was harder to understand by others".
If a couple of esoteric language features can consistently reduce your codebase tenfold, you might want to rethink how you engineer your apps. Unless you're writing your application in FORTRAN (and I mean the version before 1958 and the introduction of procedural code) or assembly, I doubt there are any two general-purpose languages that show a 10x difference in code size in large applications.
> And honestly, sometimes I don't know if code is too "clever" or the reader is just too "dumb".
There are two problems with this argument. First, brilliance in code is not a property of what linguistic abstractions the code makes use of, but the elegance of the algorithm and engineering. I sincerely doubt that no brilliant code had been written prior to the introduction of your favorite linguistic abstractions. Second, even supposing you're right, what difference does that make? If your code's maintainers are dumb, do you think that you could change this reality by making life harder for them, or is it your job to adapt yourself to the reality of the system you're a part of? When creating something for dumb people to use, is it a sign of good craftsmanship to make it harder for them to work with?
> And honestly, sometimes I don't know if code is too "clever" or the reader is just too "dumb".
If you ever wonder this, then it's definitely a problem with your code and not the reader. Code that cannot be easily read and understood by others is worthless in any commercial setting.
If you're too numb or stubborn to learn the tools that your programming environment provides for writing higher quality code, I have little sympathy and would suggest that such a person find another line of work.
If, for instance, someone were writing C# and insisted on hand-rolling for loops in all cases[1], because LINQ is "too hard", that's them being lazy and unwilling to invest a tiny amount of effort in learning more effective methods.
[1] There are some cases where it's more performant to use a plain foreach or bare for, but unless you're in tight loops or dealing with ginormous collections, it's a premature optimization.
I don't agree that all languages are created equal in this regard. I think that there are cultural norms and expectations that come with certain languages that make them more or less susceptible to "cleverness."
For instance, Python has never had the problems that Ruby or Perl had.
As the saying goes: it's easier to write code than understand it. Therefore if you write code as cleverly as you can, then by definition you're not smart enough to understand it...
For those that like trivia, the original quote I believe is by Kernighan and it's about debugging code rather than writing it, which I feel makes more sense:
"Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?"
Yes but occasionally there's no other option. I once had to code a parser for a data format with an odd non-BNF grammar with a bunch of special cases. In order to meet the functional requirement for user-friendly error reporting when parsing invalid inputs I was forced to write really clever (in a bad way) code. Fortunately we haven't found any serious defects in it because I don't think I understand it well enough to debug it.
This is one of the extremely cool things about python. There's always one right way to do this, and it's usually pretty obvious. There's also quite clear, standard style guidelines, so code is generally formatted pretty much homogeneously, and tends to be a lot more readable than many other languages.
Per the Zen of Python, there should always be exactly one sensible way to do something, but unfortunately this is not often the case. List comprehension can be achieved just the same with functools, itertools, explicit for loops, etc.
That said, I do tend to have that mantra a little more present in my head when I'm working with Python.
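As a small illustration of that point (Python, with made-up data): the same transformation written four ways, all of which you will meet in real codebases despite the "one obvious way" mantra:

```python
from functools import reduce

words = ["go", "scala", "clojure", "perl"]

# 1. list comprehension
a = [w.upper() for w in words if len(w) > 3]

# 2. map/filter builtins
b = list(map(str.upper, filter(lambda w: len(w) > 3, words)))

# 3. explicit for loop with an accumulator
c = []
for w in words:
    if len(w) > 3:
        c.append(w.upper())

# 4. functools.reduce
d = reduce(lambda acc, w: acc + [w.upper()] if len(w) > 3 else acc, words, [])

print(a == b == c == d)  # True
```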
Yep... making code 'understandable' is really important.
But given that discipline, Scala would outshine Go, since Scala has much better typechecking than Go - e.g. http://getquill.io/ can typecheck against a running DB schema, etc.
It got really bad the past 10 years or so with the "look at what I can do, ma!" blog posts where someone spends 4 pages violating the language in order to get clicks. It's especially endemic in Ruby-land, I find.
I always work with a few developers that complain about my long variable names and aversion to certain shortcuts like ternary operators.
They don't understand that unclear code is probably the number one cause of technical debt. Nobody wants to waste time trying to understand it so they start to attach workarounds and it just keeps getting worse.
Some of their code is so "clever" that I've refactored the line with 7 method calls just to understand what the hell is going on.
Lambdas and fluent syntax make me quiver with fear. In the wrong hands they let you do unspeakable things
> aversion to certain shortcuts like ternary operators.
> They don't understand that unclear code is probably the number one cause of technical debt.
At the same time, verbosity can have an obfuscation quality all of its own. For simple assignment, I find a ternary operator very clear and concise, and much preferable to a 5-9 line (depending on style) if/else for a simple assignment. It also might keep you from using the single statement version of if/else if your language supports it, and that's probably justification in itself given how many problems that's caused in the past.
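A Python sketch of that trade-off for a simple assignment (names invented for illustration):

```python
threshold = 10
value = 7

# Conditional expression: the whole decision reads as a single assignment
label = "low" if value < threshold else "high"

# Statement form: four lines, and the target variable is written twice,
# which is where copy-paste and refactoring typos tend to creep in
if value < threshold:
    label2 = "low"
else:
    label2 = "high"

print(label == label2)  # True
```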
Well, that's only "directly what you mean, without special operators" if you come from a C-style procedural background and have already internalized all the special operators you've included there, such as parentheses and braces. Sure, that's most people, but that doesn't mean they aren't operators.
Ruby has something similar, and I can't stand it. I think the conditional is the most important item in the phrase, and it's shoved off to the right. If you lead with the conditional, it becomes immediately apparent that the assignment is predicated on the result of a branch.
Ruby (and Python) likely get that from Perl, which has post conditionals, but with specific qualities to prevent them from too much abuse, and which also prevents them from being used in the way presented here (which is why I didn't trot them out earlier, as much as I was tempted by the "you write what you mean" line). The limitations are that there is no else branch, and it only applies to a single statement, so you can't have a block executed with a post conditional. It leads to usage like so:
die "Invalid param: please enter a positive number" unless $param1 > 0;
$param2 = 0 unless defined $param2;
return undef if $param1 and not $param2;
my $foo = 1 if $bar; # This unfortunately creates a closure around $foo and is a big source of bugs.
As much flak as Perl gets, quite a lot of thought went into making it flow similar to how people think and talk (which is no surprise if you know Larry Wall is a linguist by training). There were some missteps, but it was very early in this area, so that's expected.
Perl hung onto too many of its warts for too long to stand a chance of competing with Ruby and Python. Only recently has Perl5 introduced real function parameters instead of unrolling @_. Flattened lists are another one, but the worst is having to specify "use 5.020;" if I'm using Perl 5.20. They've even carried this "tradition" into Perl6, where you have to specify "use v6;" at the top of EVERY damned script. That's progress? Prefixing every variable with "my" is another one which found its way into Perl6. Why can't an advanced language have default lexical scope?
> Only recently has Perl5 introduced real function parameters instead of unrolling @_.
Yet there have been modules that have supported it for years, and with many more features than what was recently rolled out (which was meant to be conservative).
Here's[1] what I said about this quite a while ago. Named parameters with type checking (unfortunately at runtime). I've been writing Perl using different modules (Function Parameters) which use the same syntax for about six years now (for functions, not all the sugar on Moose objects).
> Flattened lists are another one
Flattened lists never cause me a problem. If they cause someone problems, I think they've never really learned what context is in Perl. Once you know how context works in Perl and had a chance to use it to good effect, I can't imagine this complaint persisting. Perl is fundamentally different than most languages in this respect, even if it looks superficially similar to more procedural languages. This is actually a cause of a lot of problems for novice users, because they assume their experience in C/Algol derivatives will map exactly, and where it doesn't people get frustrated.
> but the worst is having to specify "use 5.020;" if I'm using Perl 5.20
What? You don't have to do that. If you want to use newer features that utilize keywords which may conflict with whatever you've written or whatever modules you are using, then yes, you need to opt into those. Perhaps you would have preferred it if it silently just broke?
> They've even carried this "tradition" into Perl6 where you have to specify "use v6;" at the top of EVERY damned script.
No, you don't. If you do, and you run it in Perl 5, it will automatically swap out the interpreter for whatever Perl 6 interpreter you have in $PATH though.
> Prefixing every variable with "my" is another one which found its way into Perl6. Why can't an advanced language have default lexical scope?
The requirement to define your variables is not because scope isn't lexical by default (it is lexical by default, as you can see with no strict). It's strictness that is enforced, which the Perl community has found vastly preferable to automatic instantiation of variables, because it prevents bugs and a lot of confusion. You have to define your variables because the Perl community found that a more sane default.
Agreed, plus obviously your "if" statement doesn't do the assignment to usefulMetric. One more way the ternary wins (along with functional languages that use "if"s as expressions).
At some point you have to rely on policy and not language constraints. I submit that no language is constrained enough to protect against refactoring stupidity while also being flexible enough to be useful to the average programmer on the average project. If not ternary if, it will be something else. So, do you throw out every alternative method to accomplish the same thing, or do you put policies in place to keep the code sane, such as "no chained ternary operators are allowed" ?
In all honesty, I prefer having rules that have no special-case 'unless' issues. It's too much effort/trouble to remember all the cases where things don't work. I'm a good engineer but a terrible compiler.
I believe part of learning a new library/framework/language is to limit yourself to a certain subset of the API offered. After working with Ruby (the language) and Javascript (the ecosystem), I feel like that's the only way to preserve your sanity and productivity. I don't need to know 4 different ways of creating a lambda in Ruby; selecting 1 that can express the other 3 is good enough.
---
In this case, the rule would be no ternary operators, since they work well unless you nest them or unless you make them long/complicated.
Other examples -
You don't need to wrap if conditions unless you have a multi-line body:
if (myCondition)
    x = 42;
    y = 23;
Early returns simplify short circuiting logic unless your function becomes too long:
if (myVariableAtBeginningOfFunction) {
    return true;
}
...
// 2 screens later
...
if (x == 42) {
    return false; // why am I not getting false?!
}
Using a variable as a conditional in javascript to test against undefined works well unless the value can be falsy:
if (person.isStudent) {
    showSchool();
}
if (person.age) {
    showBirthCertificate(); // what if age is 0?
}
> You don't need to wrap if conditions unless you have a multi-line body
Why not "keep unwrapped if conditions on a single line"?
> Early returns simplify short circuiting logic unless your function becomes too long
Why not "keep functions short"?
> Using a variable as a conditional in javascript to test against undefined works well unless the value can be falsy
Why not "only use conditionals on boolean values"?
I'm not saying your rules are right or wrong, I actually follow a couple of them myself, but your wording implies that other people are simply not following rules, or their rules have a lot of nuances and special cases, but the reality is more likely that their rules are different.
Ultimately we all make different connections and form different patterns in our head. As long as a team can agree on a code style, within a few months everyone starts developing the same cognitive patterns.
It's because it's too easy to lose some of these nuances during refactoring/development blindness. I'd go so far as to say it's inevitable.
If you come into a 3 year old codebase and during the first 2 weeks you need to add extra functionality to a 30-line function with an early return, are you going to refactor the early return? Or are you going to extend it into a 32-line function? What about the new hire after you?
Alternatively, your team has decided to embrace the "only use conditionals on boolean values" philosophy. You're working with a section of code that reads `if (myVar)`. It's been 3 hours, and you don't understand why the code's not working. Suddenly, you realize that at some point `myVar` was refactored from a non-nullable boolean to a nullable number, and someone missed changing this.
And the biggest offender yet - code that is grouped within a file into 'logical sections'. I've never seen this work out. What is a logical grouping for you is a confusing pairing for me. Or maybe it's that I can't immediately grok all 2000 lines of a file I've never seen before, and know where to place the method. This madness around code location is one of the quickest ways to code rot.
---
The perplexing thing to me is that these situations are completely preventable.
If you don't use early returns, scenario 1 won't happen.
Scenario 2 won't happen if you use real comparisons e.g. `if (person.age !== undefined)`
(Similarly, `if (person.age != null)` breaks when null and undefined start meaning different things...)
And lastly, a canonical alphabetical/visibility ordering for methods in a file of any length is unambiguous. I don't care what the order is, as long as there is a canonical order.
---
I understand that other teams have their own rules. It's no trouble at all to adjust to things that are purely syntactic differences. But when the rules that are chosen hide lurking semantic pitfalls...I don't know why you'd risk shooting yourself in the foot.
A lot of my strong feelings on code style come from the book Code Complete. I highly recommend that to everyone who hasn't read it. It's filled with examples of confusing/broken code you might inherit, and teaches you how to avoid creating it yourself.
Edit: looks like we hit the HN thread depth limit. Happy to continue this over Twitter, check my profile.
That's just laziness on the part of the refactorer. At that point, you need to use an outer if-else statement. Ternary operators are confusing when nested.
IMHO it's the wrong approach. Every programming language, just like the spoken ones, has its common shortcuts and idioms. The fact that they're commonly accepted and used is what makes them easy to understand. Your brain learns to recognize them quickly, often much quicker than the long version. With newbies and programmers who switched from other languages, the problem is that their brain is just not yet trained to do that efficiently. Instead of investing some time into getting used to the peculiarities of the language they use, they try to avoid them as "complicated". By setting the bar too low, and avoiding these patterns altogether, you encourage people to never train their brains to recognize them effortlessly. And by definition of common patterns, they're, well, common, and they'll keep running into them all of the time. Also keep in mind that you're probably bothering others, more skilful ones, with unnecessarily verbose code which is to them harder to quickly scan through.
I'm not saying that one should go crazy with one-liners or uncommon patterns, but things like ternary operators used with reasonably short expressions in a single line of code are totally valid and should be readable to any average dev out there.
> Also keep in mind that you're probably bothering others, more skilful ones, with unnecessarily verbose code which is to them harder to quickly scan through.
I stopped contributing to one PowerShell repository because the author thought that PS is hard and he wanted Get-Process. I put an "I am the greatest babysitter" meme in a PR and that was considered very disrespectful.
Particularly:
> Also keep in mind that you're probably bothering others, more skilful ones, with unnecessarily verbose code which is to them harder to quickly scan through.
In a general context, yes. But if you tell your top contributor, who has done a full day job's worth of work for the entire year for free (while the main author also has a commercial offering), that he should also babysit "dumb" users (making the entire job not fun), and you are tipped off multiple times that such behavior will alienate him from the project, you can be sure there is a far better approach to project management. Since I left, the PRs and issues that nobody looks at have started to pile up (I kept both at almost 0), which is extremely important given that the project relies on constant PRs and reports from the community.
It's a FOSS setting, not a professional setting. On the other hand, being a jerk to people who do excellent work for your project for free is far from appropriate in any setting.
Your underlying assumption that everyone working on the code will be skilled is wrong in any large team. It's not like it takes a lot longer to read a 3 line if statement than a ternary operator.
Terse code isn't much faster to read, the difference between a 300 and 400 line file isn't significant.
Your attitude of "he isn't 1337 enough to understand my code" is the logic that leads to uncommented, horrible-to-maintain code in the first place.
> It's not like it takes a lot longer to read a 3 line if statement than a ternary operator.
It doesn't take a lot longer to sit down and understand how ternary operators work, either. C'mon, it's not differential equations, it's just a notation, and a fairly simple one. It might take a newbie slightly more time at first to understand the logic. We've all been there: you stop and stare at it for 15 minutes, but after a few rounds of deciphering it you get used to it. It's not about being 1337 (I surely hope that's not what's considered elite these days), it's about learning new stuff, and any averagely intelligent person can do it. Honestly, would you really hire someone who is not capable of teaching him/herself, in a reasonable amount of time, how to read a ternary operator? What programming would that person be capable of in the future?
This is not really as subjective as you think. Research in software engineering shows that certain structures are more prone to errors than others. We know that higher cyclomatic complexity leads to more bugs, more statements lead to more bugs, and certain usage patterns lead to more bugs.
Smart people can disagree, undoubtedly, but there's a reason why GOTO is considered to be brain cancer and pattern matching is generally considered to be great.
Sure but I'm talking about things like "break things down into lots of small functions to make things more readable!" vs "keeping code together makes it easier to read!" or "make things verbose so it is easier to read!" vs "conciseness makes code easier to read!"
On a lot of those I know what makes code easier for me to read. It's not the same as some of my coworkers. Based on some of your phrasing I suspect we'd agree on a lot of them, fwiw
When the metric used is "easier to read" it becomes far too subjective IMO, things you mentioned are similar but not quite the same
In my experience, very experienced programmers end up converging towards very similar idioms: terse expressions for common patterns, clarity when the domain is complex through verboseness if necessary, and just keeping things as simple as possible unless there's evidence that complexity will reduce technical debt in the future.
I don't really see highly competent devs doing the whole J2EE architecture astronautics anymore, nor using single-char variable names. There's a tendency to write things concisely when simple, and then moving the complexity away to some other place, stashed in its own function, when it reaches a certain mental threshold of complexity. There's a tendency to use the best features languages have to offer, maximizing simplicity through orthogonal features and repeated idioms, while discarding unnecessary cruft; one of the marks of junior devs is their desire to try to fit problems into new idioms just to test out language features or strange design patterns.
Given that human intelligence is fluid but its variance just isn't that high (after all, we all have a similar amount of working memory), common design practices emerge out of this understanding of our limits for reasoning about problems. Exceptions abound in extremely technical and complicated problems (just look at non-trivial linear algebra code or bit-flipping, low-level device drivers), but for the most part it is striking how things are made to look simple within a finite range of tradeoffs. This has been my experience in my domain of expertise; look at most current web development frameworks and, even in radically different languages, they respect common patterns that are really about the essence of the request-response cycle, not made-up constructs of additional complexity or a restating of the problem.
We had a policy at my last place that any SQL joins alias the table name with a single character alias. The rule resulted in the most insanely confusing stored procedures I've ever seen. Whoever came up with that is a complete idiot
It's a judgment call whether small functions are more readable than cohesive code.
If the 'idea' of the code isn't easily broken down into abstractions, even mentally speaking, then small functions will just obscure what's actually going on by pointing out all the implementation details that are wound together.
I think the point is that there's a wide range of styles that are readable, but the condition for readability is also related to the skills of the coder to clarify. And that range is ample but also finite.
> but there's a reason why GOTO is considered to be brain cancer and pattern matching is generally considered to be great.
Look at any large C codebase and you will see plenty of goto statements to manage resource cleanup. The problem is using goto in place of structured control flow like loops and if/else. Statements like yours make it seem like there is a conceptual problem with a jump.
Sure there are some cases in C where you want a particular control flow that the language doesn't allow, and goto is the best solution in those cases.
But all the examples of this that I've seen are still structured; it's just that the language isn't able to express that structure. The most common examples are jumping out of nested loops (better solved by allowing named loops so you can use break/continue) or jumping immediately to error handling logic (better solved by exceptions, or even just simplistic try/throw/catch).
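Java's labeled break is one concrete form of the "named loops" solution mentioned above; a hypothetical sketch:

```java
public class NestedSearch {
    // Find target in a 2-D grid. The labeled break exits both loops at once:
    // the structured replacement for a C-style "goto found".
    static int[] find(int[][] grid, int target) {
        int[] result = null;
        search:
        for (int r = 0; r < grid.length; r++) {
            for (int c = 0; c < grid[r].length; c++) {
                if (grid[r][c] == target) {
                    result = new int[]{r, c};
                    break search; // jump out of the nested loops, nowhere else
                }
            }
        }
        return result; // null if not found
    }

    public static void main(String[] args) {
        int[][] g = {{1, 2}, {3, 4}};
        System.out.println(java.util.Arrays.toString(find(g, 3))); // prints [1, 0]
    }
}
```

Unlike an arbitrary goto, the jump target here is constrained to the enclosing labeled statement, so the control flow stays structured.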
I do think there's a conceptual problem with arbitrary jumps.
That's why I'm a proponent of code ownership. We should be coding to each others interfaces instead of constantly poking around in the same shared codebase. It just leads to pointless re-writes and low quality - a tragedy of the commons.
Legacy shitcode is legacy shitcode. If it's owned by someone at least we can push for a sane interface, which is better than the abandoned communal messes I encounter in the real world.
And that sort of Scala code is extremely meaningful to other folks, which captures my point. I see Scala code all the time that would give me an instant headache, but there are people who would find it more readable. To each their own; the trick is to work with people who are at least somewhat aligned with your sensibilities.
+ Any decent developer can read decent code in Java or whatever normal language and get along just fine.
+ Only a few people can deal with Scala - and even fewer if there's a lot of project-specific Scala weirdness used in a particular program.
So sure - among a narrower set of 'Scala friendly' developers, and possibly within that even narrower set of people familiar with the 'Scala weirdness' of a particular project - those people can 'get along fine'.
The problem is that this can be a pretty narrow set of people.
Scala would have to represent a pretty big advantage for its general weirdness to be worth bothering with.
I don't think it does - hence the 'de-adoption' of various entities.
My gut tells me it's past the threshold - the 'extra power' Scala offers just isn't quite worth its weirdness for most things, so most devs won't learn it ... and then it becomes less valuable from a business perspective.
That's not what you said. You said others, not most people. The most important aspect is who you surround yourself with. For instance, the Scala folks at Verizon basically live in the zone you're talking about, and it is fine for them, even though half the time it makes no sense to me.
> They don't understand that unclear code is probably the number one cause of technical debt. Nobody wants to waste time trying to understand it so they start to attach workarounds and it just keeps getting worse.
True, but once you get into the length of variable names in iOS and Android development, you're in a whole new territory. 38-character variable names have no place in life.
The design of a language/environment can sometimes exacerbate this particular developer problem. Because so much of the power of Smalltalk lay in its powerful debugging, weird proxy stuff had a potent impact.
Scala is the latest whipping boy(1). It's a great language with tons of warts, but it actually acknowledges the warts.
Case in point: when Paul Phillips (2) went after Scala (mentioned in the article), Odersky took some of that criticism to heart for the next iteration/rewrite of the Scala compiler. In an industry where everyone doubles down, that's extremely refreshing.
Scala's cognitive footprint can lead to misbehaving programmers, but clean Scala has its own elegance if the cuteness is avoided. The slowness I'll give you though :(.
1) I'm old enough to remember this language CoffeeScript that "really really sucked". And then all of a sudden, people were using ES6/TS with parameter destructuring, classes, lambdas, but yeah, CS was the bad guy.
2) Despite his bellyaching, Paul Phillips never really left Scala the language (check his commit log), just the compiler team & lightbend.
Well, CoffeeScript was refreshing in being very terse and in introducing a lot of niceties that were missing in JS. It was also trivial to introduce footguns, given its whitespace rules and some pretty radical syntax rules.
New JavaScript has basically taken the best that CS had and wrapped it in the necessary turd that is backwards compatibility in a hastily designed language, but the results speak for themselves and modern JS is just much more pleasant.
Sorry if my footnote's sarcasm was unclear. I really liked coffeescript, I still do. It never had the support it needed to succeed but was a decent enough improvement to be worthwhile. Plus being included in the rails Gemfile was the shot across the bow transpilers needed.
Typescript/ES6 took a lot of the goodness from CS, but I wish they took more. Hell, I wish the typed coffeescript became a thing.
Yes, the frustrating thing was that people previously praised the language _despite_ its obvious footguns. A language where 'dropping parentheses where you can' is idiomatic is a disaster waiting to happen.
I don't know about you, but I never want to be holding a footgun, ever.
CoffeeScript worked pretty great when I used it on solo projects, because I could use the bits I liked (which were very nice) and ignore all the footguns that I didn't. The problem, as such, was that any CoffeeScript I wrote turned out not to be all that idiomatic.
Some experience to share: I studied Scala and FP on the side before jumping to a team that was using it in production. Most of the engineers on the team are enthusiastic about learning and using FP.
Bi-weekly we have a book club where we take turns presenting a topic from Functional Programming in Scala, Functional and Reactive Domain Modeling, and others.
We program as simply as possible but when a new technique is discovered we go ahead and use it after it's been presented to the team and everyone is comfortable with it.
All code is reviewed, and unreadable code does not pass.
Compile times haven't been an issue. As an example a 100k line Scala program with around 900 files takes around 2 minutes to rebuild whilst incremental changes are immeasurably fast. Reloading code while a local server is running is easy by default in IntelliJ.
Using worksheets for playing is often useful.
We don't use actors where streams would make more sense, and vice versa - know the purpose of your tools.
I've had bad experiences with Go. I know that for the application I'm working on it would not scale to 10 programmers working and constant refactoring due to business goals changing.
I haven't tried Go myself, though I've been meaning to, but I have spent the last 3 years writing Scala code. Before learning Scala I didn't have any real experience with FP, and I think that was the real learning hurdle for me. Once I learned the FP ideas, Scala became my preferred language. I think it's really funny, but Java and C# have been becoming more Scala-like, and to some extent the latest versions of JavaScript are also starting to become Scala-like. What I have come to accept is that Scala does take people time to learn, but it's typically not Scala the language - it's the FP aspects of it that are the stumbling blocks.
PS. I also work on a 100k Scala program and can go from clean to compiled in less than 2 minutes, and incremental compiles are extremely fast.
If you don't have code reviews, and don't have any sort of regular teaching sessions that push people towards similar conventions, then I can see how code can become unreadable. But we really haven't had that issue w/Scala at all.
> Bi-weekly we have a book club where we take turns presenting a topic from functional programming in Scala, functional reactive domain modelling and others.
I think this is precisely why a former eng director at Twitter is quoted as saying he'd do Twitter in Java if he started over again. You have to spend a lot of time studying Scala. This is a no-go for any big org that wants to be efficient at horizontally scaling up engineers.
It works great for small teams or orgs that don't need to hire engineers like crazy.
Looking through the really abstract Scala code they linked brings up a problem that really frustrates me in Haskell. Why doesn't anybody document their really abstract code? You know it's going to be confusing, so why not help out? If I have a type like
It's not sufficient to document the function's arguments. You also need to document the type variables! Likewise in Haskell with code like
f . g x . (h (i x) $ j y z) . k $ aNamedVariable
It really isn't so hard to refactor that into
let descriptivelyNamedFunction = h (i x) $ j y z
    anotherDescriptivelyNamedVariable = descriptivelyNamedFunction . k $ aNamedVariable
in  f . g x $ anotherDescriptivelyNamedVariable
It's much larger, but in certain places it prevents so many headaches. It's great that you can put stuff inline, but both communities seem super lax about accepting code that is the opposite of self-documenting.
I mostly agree with you, but I also think a big part of what Haskell-like FP shows is how often your code is invariant to so many things that the variables no longer have any sense to them whatsoever.
That doesn't really excuse badly named variables when that doesn't hold. It's more that it's something sort of novel to a lot of Haskell-like programmers and so we all get excited about it and probably overdo it somewhat. But on the other hand, I think it's very well-justified often enough.
For instance, with
class Functor f where
  fmap :: (a -> b) -> (f a -> f b)
there are many words you could give to f, a, and b but they're essentially all misleading.
With another example from the article, Strong Syntax, the "Syntax" bit is basically a convention in scalaz that ought to be immediately obvious if you're familiar with the scalaz library. The "Strong" bit has to do with a subtype of "Profunctor" structures, "Strong Profunctors".
Profunctors are generalizations of functions that show up all over the place. Strong profunctors are profunctors which can "distribute over a tuple".
Giving names to these types is an exercise in futility. At least with a function, `a -> b` might be named `in -> out`, but with a Profunctor there isn't even that intuition. It's just too general. Consequently, it shows up all over the place.
With profunctors there's not necessarily anything going in or out, and especially not necessarily any notion that the thing going in produces the thing going out.
That's all roughly true with one kind of profunctor, a function arrow, but not true in general.
For instance,
data Counterexample a b = Cx (Set a) b
is (very, very, very nearly [0]) a profunctor, but a isn't necessarily "going in" and b isn't necessarily "coming out".
[0] It's a profunctor if a is finite. You can handle the infinite form by writing it as `data Cx a b = Cx (a -> Bool) b` which is equivalent to what I wrote when `a` is finite... but it also makes it a little easier to pretend that a is "going in" even if that's a bad intuition.
I'm not sure I see how the intuition is bad here? What would the lmap implementation for your (Set a) example be if not equivalent to the (a -> Bool) case? And how is that not an example of data "going in"?
More generally even if there are Profunctors for which this analogy isn't perfect, I feel like the intuitive type names are still more useful than random letters. Especially for a relatively advanced concept like Profunctors, for which I expect pretty much all users to be comfortable with the idea that a type variable does not necessarily imply that concrete values of that type will exist.
It is equivalent to the (a -> Bool) case. In Haskell at least, all "negative type parameters" are ultimately generated by the left side of a function - but this is more a concern for how ideas are modeled (perhaps partially) in Haskell than a fundamental one. If you're writing code that's
forall p . Profunctor p => ...
then `in` and `out` aren't generally valid.
I definitely hear the argument that ergonomically it might be a good idea to use a sort-of-appropriate model to drive better terminology... but at the same time I think there are drawbacks. I think it helps the early part of a learning curve but then hinders the latter part. An expert doesn't care what they're called since they're just mentally erasing the names as appropriate anyway. A non-expert will try to carry the metaphor further than it can go and gets stuck.
This is why there's endless debate about calling Monoid "Appendable" or something like that. For the commonest cases that's the right idea... but the first time someone questions why there's an Appendable instance for Bool (two, actually!) you're fighting with your own inappropriate metaphor.
"Oh, Appendable actually means Monoid but we didn't want to say that straight up."
This is what I really hate about functional programming (especially Haskell): it's almost as if the more obscure and unreadable your code is, the more competent you are deemed to be.
As a small example, let's say I have a typed "agg" function with associated typeclass:
class TypedAgg t where
  agg :: t f r a b -> (r a -> b) -> f (r a) -> f b
  ...
To lots of people, that's going to be really confusing. It might be easier if I document that f is supposed to be the type of the table, r is the type of the row, and a is the type of elements in the rows, if that's the intended usage.
But I'm saying, if you have a Higher Kinded Type like Strong, what do you name the variables? They don't refer to actual nouns. We're abstracted from that level.
In my example, t, f and r are HKTs. Does that clear up what I mean at all? Strictly speaking in my example, f (r a) is the type of the table, but it's still illustrative to say that f is the type of the table, or say that it's probably a functor or at least similar to one.
Specifically with Strong, I'm not sure what I'd comment, as I'm not that comfortable in Scala yet and don't know what Strong is. I'm not going to go digging around and there are basically no comments in that file, which is the problem I'm talking about.
My company also has a lot of problems with Scala, both technical and non-technical ones, but so far we have managed to keep them under control:
- As for slow compilation times, incremental building helps a lot.
- IntelliJ works most of the time, and if it doesn't, a few type annotations will fix it.
- It's very hard to find people with Scala experience. We have given up on finding them and decided to train people from scratch for 2 months. I myself studied mechanical engineering in university, and I could code just fine after a few months.
- The language itself is very complex, so we let the more experienced programmers write the backbone/framework/library/common parts and the less experienced ones do the glue code. That way we get Scala's type safety and expressiveness without scaring the juniors. IIRC one company that used Haskell did the same, and they said it was very hard to introduce runtime bugs because almost everything was caught at compile time: "If it compiles, it works".
You may ask why we go through so much trouble just to use Scala. There are many reasons (speed, type safety, etc.); one of them is that we can be immensely productive when needed: the conciseness of the language combined with a powerful type system lets us implement complicated features rapidly with very few bugs. IMHO Scala strives to combine both FP and OOP on top of the JVM, so a lot of tradeoffs had to be made. The developers have to learn a lot of concepts and have good self-discipline, but in exchange we can write fast, robust systems and even enjoy it.
(Edited for better formatting, this is my first time posting here.)
+1 for "If it compiles, it works" ... been there, done that ... multiple times! And yes, in the beginning the feeling of "it works on the first run" was just strange, and it took me some time to get used to it.
I'm always eager to learn how we can improve Scala, especially as we kick off the Scala 2.13 cycle (hard at work on compiler performance and standard library improvements). Email is 'adriaan.at("lightbend.com")
I don't think that Go is a good language to compare with Scala. (I do like Go, though.) The two languages could not be more different in philosophy. Scala is maximalist - you can do things many ways, you can call java, you can have tremendously intricate types, etc. Go is minimalist - there are just enough tools to get by, and sometimes it feels like you are missing one. My experience with Scala is that you spend more time telling it what to do but not how to do it (and it is often not obvious exactly how things will be done), while in Go you have to tell it both what and how to do things, which results in longer code, and repetition, but less ambiguity. You can't add to Scala to make it more like Go - the only way to make it more Go-like is to remove from it, which is impossible.
I think a more appropriate comparison for the language would be F#, which is probably not a surprise to you. I have never used Scala professionally, so I can't give any suggestions that would improve the use of Scala for day to day programming. Years ago I was learning 1 language per year, and picked up Scala and F# that way. After completing the Coursera courses on Scala, I put together a few projects on github using it, enough to get some job feelers that ignored my "don't send me job offers." And I realized that while I enjoyed fiddling around with the language on my own, I didn't want to spend my professional time deciphering other people's Scala code, and so I dropped it. Take from that what you will - maybe I am just not cut out for it.
I do use Go professionally, although it is a minority language where I work.
Thanks for your balanced reply. Sadly, some people see it as a badge of honor to write super clever code that's essentially write-only, and Scala somehow triggers this in them :-)
We, as the Scala community, play an important role in shaping the culture of programming in Scala: one that embraces simplicity as the true elegance, values maintainability and testability, and stays friendly and open to criticism. The language will remain flexible (though we're always looking to remove warts); it's really up to your company culture to decide how to use it (and that differs between teams and over time).
Many big players, such as Twitter, have done a great job with that (and continue to do so).
Yes, maybe some of the success stories about changing from language X to Go are actually because Go enforces at the language and tooling level behavior that could potentially be enforced culturally at the company, but which, for whatever reason, the company has not been able to develop. Kind of "if you don't play well with your toys, we take them away," instead of teaching them to play well from the start. When you're just a senior developer, you probably can't change the culture, but you might be able to change the language for some applications.
There is the old jeremiad to not use technology to fix cultural problems, but when you're just a part of a much larger institution that may or may not have the ability to intentionally change its programmer culture, it can make a lot of sense to move to a language that reduces your reliance on those cultural behaviors if they are lacking.
Some of that could be addressed automatically in Scala with code-standards enforcement, e.g. with Scalastyle. And you can help with good practices like code review or pair programming. But in a lot of places there is no appetite for "wasting time" on stuff like that (I vehemently disagree with that kind of attitude, but changing other people's views is not easy).
The advantage of Go is that, left to their own devices, people will tend to gravitate towards more readable code, in a standard format, using standard tools that are pretty good in most situations. The entropy of having a bunch of people work on a project will then work in favor of a coherent approach and style, instead of tending to diverge into everyone using the tools they like best in the format they prefer.
"Go" the language is tricky to search for as just the word "go", so I wouldn't read much into that. Many job ads contain the word go, but have nothing to do with go the language. e.g. "...go to our website..." "...go above and beyond..." etc.
I just wanted to take this opportunity to thank you and the team at Lightbend. There is a clarity of thinking and expressiveness in Scala that I haven't found in other languages. I have used Scala professionally for the past 3 years and enjoy it immensely as a language.
- The actor model, along with the Akka implementation, has nothing to do with functional programming, and it isn't orthogonal to it either, since an actor's mailbox interactions are definitely not pure, with actors being stateful and at the same time non-deterministic. In general, if you place those in the same article, there's a high probability that you never did functional programming; and if you did actual functional programming (as in programming with mathematical functions), you wouldn't want to go back to a language that makes that impossible ;-)
- Akka actors are not the only game in town for processing data; you can also use Finagle, FS2, my own Monix, and even Akka Streams. And yes, concurrency often requires multiple solutions because there's no silver bullet, and Go's channels suck compared with what you can do with a well-grown streaming solution
- Scala's Futures are not meant for "hiding threads", and 1:1 multi-threading is actually simpler to reason about, because if that fancy M:N runtime becomes unsuitable (because, let's be honest, most M:N platforms are broken in one way or another), you can't switch to a more appropriate solution without changing the platform completely
- Your devs are maybe lazy or maybe they don't give a fuck, but given that you're supposedly dealing with concurrency in your software, if those developers struggle with a programming language, then it's time to invest in their education or hire new developers, because the programming language is the least of your problems
- Paul Phillips still works with Scala and he most likely hates languages like Go, so when you mention him or his presentation, it definitely cannot be in support of Go
I don't understand the claim that Akka actors are both impure and non-deterministic.
If your actor is communicating only through its inbox, then it is both pure and deterministic. Given the same set of messages in the same order, you arrive at the same actor state. Sure, you can do wacky things with side-effects, but that's not akka's fault.
> in general if you place those in the same article, there's a high probability that you never did functional programming; and if you did actual functional programming
This sounds a lot like the "no true Scotsman" fallacy. I'm not attacking your argument here, but perhaps you could expound upon that first point and clarify.
> Given the same set of messages in the same order, you arrive at the same actor state
You just described object identity (see [1]): an object whose state is determined by the history of the messages it has received. An object with identity is stateful, side-effectful and impure by definition.
So no, an actor is almost never pure or deterministic. I'd also like to emphasize determinism here, because you can never rely on message ordering, given their completely asynchronous nature, so you get something much worse than OOP objects with identity.
> This sounds a lot like the "no true Scotsman" fallacy
I'm talking about my experience. Given that I'm currently a consultant / contractor, I have had a lot of experience with commercial Scala projects initiated by other companies. And in general the projects that use Akka actors are projects that have nothing to do with functional programming.
This happens for a lot of reasons, the first reason being that most people are not in any way familiar with functional programming, or what FP even is for that matter. Not really surprising, given that most Scala developers tend to be former Java/Python/Ruby developers that get lured by Akka actors and once an application grows, it's hard to change it later. Evolution to functional programming happens in a web service model where new components get built from scratch.
But the second reason is more subtle. Functional programming is all about pushing the side-effects to the edges of your program. And if you want to combine the actor model with functional programming, you have to model your actors in such a way that they contain no business logic at all (e.g. all business logic modeled by pure functions, immutable data structures and FP-ish streaming libraries), evolved only with `context.become` (see [2]). So such actors should be in charge only of communications, preferably only with external systems. This doesn't happen, because it's hard to do, because developers don't have the knowledge or the discipline for it, and because it then raises the question: why use actors at all?
Because truth be told, while actors are really good at bi-directional communications, they suck for unidirectional communications, being too low level. And if we're talking about communicating over address spaces, for remoting many end up with other solutions, like Apache Kafka, Zookeeper, etc.
On combining Akka actors with functional programming, I made a presentation about it if interested (see [3]).
I have a simple question. In the article they mentioned they had a concurrency issue with a timed buffer that they later neatly solved with go channels and goroutines. They said that they solved the problem in Scala by moving to the actor model, but that required importing Akka into their project and training everyone how to use Akka.
My simple question is: couldn't they have achieved the heart and soul of the actor model by just making an object on its own thread and talking to that object through a simple synchronized message queue? It's a handful of easy-to-understand lines of code, and nobody needs to delve into the sea of madness that is learning and configuring Akka and its actor model.
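The "object on its own thread, fed by a synchronized queue" idea can be sketched in a handful of lines; a hypothetical Java version (all names invented for the sketch):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A bare-bones "actor": one worker thread draining a mailbox, with all state
// confined to that thread. No framework, no configuration.
public class MiniActor {
    private static final String STOP = "\u0000stop"; // sentinel message, unlikely to collide
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private final StringBuilder state = new StringBuilder(); // mutated only by the worker
    private final Thread worker = new Thread(() -> {
        try {
            while (true) {
                String msg = mailbox.take();   // blocks until a message arrives
                if (msg.equals(STOP)) break;
                state.append(msg);             // every state change happens on this one thread
            }
        } catch (InterruptedException ignored) { }
    });

    public MiniActor() { worker.start(); }

    public void tell(String msg) { mailbox.add(msg); }   // asynchronous send

    // Stop the worker after the mailbox drains, then read the final state.
    // Thread.join provides the happens-before edge that makes the read safe.
    public String stopAndGet() {
        mailbox.add(STOP);
        try { worker.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return state.toString();
    }

    public static void main(String[] args) {
        MiniActor a = new MiniActor();
        a.tell("hello ");
        a.tell("world");
        System.out.println(a.stopAndGet()); // prints hello world
    }
}
```

This obviously lacks supervision, remoting, and the rest of what Akka provides; the point is only that the core mailbox-plus-thread pattern is small.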
In more general terms, it's possible to use Scala as Java that plays well with immutability and functional programming techniques without turning your codebase into an overly complex difficult-to-understand mess. But for some reason people just can't stop themselves.
For what it's worth, Elixir hits a real sweet spot of functional goodness, combined with awesome concurrency without getting too deep into bizarre complexity.
I think it's becoming a more commonly held opinion in the Scala community that people often tend to go off the deep end with Akka and I tend to agree with that. In particular, I think that most of what people use Actors for can be done with Futures, and what can't be done with Futures can most of the time be done with Akka Agents (http://doc.akka.io/docs/akka/current/scala/agents.html).
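For instance, a request/response interaction that people often reach for an actor to model can usually be expressed as plain future composition. A hedged sketch using Java 8's CompletableFuture (the method names are made up for illustration; Scala's Future composes the same way with `flatMap`/`map`):

```java
import java.util.concurrent.CompletableFuture;

public class Main {
    // Hypothetical async steps that an actor might otherwise encapsulate.
    static CompletableFuture<Integer> fetchBasePrice() {
        return CompletableFuture.supplyAsync(() -> 20);
    }

    static CompletableFuture<Integer> fetchTax(int base) {
        return CompletableFuture.supplyAsync(() -> base / 10);
    }

    public static void main(String[] args) {
        // Compose the steps directly: no mailbox, no mutable actor state,
        // no supervision tree to configure.
        CompletableFuture<Integer> total =
            fetchBasePrice().thenCompose(base ->
                fetchTax(base).thenApply(tax -> base + tax));
        System.out.println(total.join()); // 22
    }
}
```

The composition stays a value you can pass around and combine further, which is exactly what gets lost once the logic is buried inside an actor's receive block.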
And when I use Actors, I tend to want to wall them off in their own place in the codebase, instead of letting the actor-ness touch multiple parts of the code.
The post, somewhat surprisingly, omits any mention of Akka Streams (which is a perfect fit, since, as the post mentions, 'The data came through a stream [...]') and Reactive Kafka (an official Akka project: https://github.com/akka/reactive-kafka ), which solve this exact use case.
These projects/modules have been around since 2014, and we've been presenting and sharing information about them a lot since then. Perhaps this post can be read as a sign that we need to put even more effort into their discoverability (in fact, we are right now reworking the documentation for this and more). Using Akka Streams, a complete re-implementation of the example use case mentioned in the post would take only a few lines: essentially a Source fed through `groupedWithin` (batching by count or time, i.e. a timed buffer) into a Sink.
Too bad the team missed these... I would have loved to point the team to the right (existing) abstraction/library in one of our many communities (github, gitter chat and the mailing lists - we're pretty nice people, come around some time), rather than posting like this. What we've learnt here though is that we need to work even harder on the discoverability of those libraries - and indeed it is one of the things we focus on nowadays (with docs re-designs, cross links, better search and more).
Anyway, just wanted to let you all know what Akka has in store for such use cases, Streaming is definitely a first class citizen in the toolkit.
Shouldn't actors each be in their own process? If an actor is in a thread, then it'll take the main process down with it.
In Erlang's BEAM VM, every actor is in its own process. So if something goes bad, it can be restarted via a supervisor.
Scala is on the JVM, and the JVM isn't built with concurrency in mind; I think Erlang's BEAM is just too good at this. Akka is gimped too: you have to write actors a certain way, IIRC, otherwise they take over the scheduler. BEAM is preemptive; it doesn't matter if you have a while(1) loop, your process/actor can only take so much of the CPU time.
I think hands down Erlang is a really, really beautiful language for concurrency. Its syntax is ugly, but it's such a small language that does everything you need for concurrency. Scala is just big, and there are so many ways to shoot yourself in the foot, and tons of compromises. I also think implicits are too magical; I shot myself in the foot many times using libraries that rely on them.
Erlang's processes are green threads; they're just called "processes". Neither Erlang nor Akka runs one OS thread or process per actor; they schedule multiple actors over N native threads inside a single process (ignoring multi-node situations).
Two notable differences: Akka actors' concurrency is done at the library level, and an actor can block a JVM thread if not coded carefully. Erlang processes' concurrency is supported at the VM level, and there's no way an Erlang process can block a VM scheduler (native code aside, but with native code all bets are off).
Yeah I'm actually learning Elixir and eventually Phoenix.
Erlang's syntax took a while to get used to, but the community wasn't for me. There was no momentum really; it was really hard to convince anybody that Erlang needed some killer framework that people could get behind. Or, hell, anything to get excited about other than BEAM, and that's behind the scenes.
Elixir is beautiful but some of the syntax is meh for me.
I found the Go solution for that issue a bit odd - channels aren't for data processing, they're synchronisation primitives. Using them the way they did in the article ruined the piece for me, since it reads like a rather uninformed decision now.
Let me rephrase: not suitable for large amounts of data that need high throughput. It's easy to see that the overhead is prohibitive for applications such as in the article, if you just benchmark it.
Maintaining a Scala code base for some years, I've learned a lot. I would not go back to a language that does not support Option/Maybe and map/flatMap. These really changed my coding style.
My largest problems [1] are all still there after years; developers only paid lip service to them, and that killed Scala, I think.
The biggest bad design decision was supporting inheritance, which leads to its own problems with type inference. Sad that, after Java devs had already recognized how bad inheritance is, Scala also got inheritance.
The most glaring problem is how very, very, very slowly Scala compiles. This makes web development (even with Play) and unit testing a huge pain (and the complicated syntax + implicits + type inference make IntelliJ very, very slow at detecting problems in your code).
Concerning the article, I do think Futures are a more powerful (and higher-level) concept than coroutines; they are easier to combine, IMHO [2].
Now I'm trying Kotlin for the faster IDE and compilation speed; sadly, the Kotlin developers think Option is only about nullable types (it's not, it's something different!) and don't embrace it.
The only thing on your list on your blog [1] that's still true is that we care about PL research. Since 2.10, we've worked really hard on improving the migration between major versions, and the feedback has been very positive. We'll keep working on finding the right balance between ease of migration and fixing issues in the libraries. Scala 2.13 will be a library release, with further modularisation of the library (towards a core that we can evolve much more slowly, and modules that can move more quickly, but where you can opt to stay with older versions as you prefer).
We've also invested heavily in incremental compilation in sbt. Sbt is meant for use as a shell, and it's super powerful when used like that. When I'm hacking the compiler in IntelliJ, recompiles of some of the biggest source files in the compiler (Typers.scala, say) take just a few seconds. I rarely have time for office chair sword fights anymore.
With Scala 2.13, half of my team at Lightbend is dedicated to compiler performance. We'll have some graphs to show you soon, but our internal benchmarking shows our performance has steadily improved since 2.10.
I still have the problem of upgrading, because not all of the libraries are cross-compiled or working. At the end of last year we upgraded one library, which cost us many days.
Next is upgrading Lift to 3.0 which will be a nightmare (again).
"We've also invested heavily in incremental compilation in sbt."
Yes, I read this over and over again, and I see micro benchmarks posted.
Using sbt with continuous unit testing, I can't feel a difference; or it is so slow with a major code base that it's still much too slow, and I judge it as having made no progress. Either way, after years it is still too slow (newest Scala + newest sbt).
"I rarely have time for office chair sword fights anymore."
Today I expect Kotlin's practically instant compilation. 10 seconds for compiling a few changed files is already too much for rapid development with TDD/web work; it breaks my flow, but YMMV.
"We'll have some graphs to show you soon,"
See above: I've seen dozens of micro-benchmarks that claim improvements, but in the end it doesn't show up in my real projects. At least not in mine, nor for the author who migrated to Go in the linked article, nor for all the other blog post authors who moved away from Scala towards something faster (Kotlin, Java 8, Go, ...).
But as I've said, I've moved on to Kotlin for new projects, because for me Scala is a lost cause.
Another side note: I would never argue with my users and tell them how wrong they are about the product, and that their perception of the lack of some feature or quality is mistaken.
The Scala compiler is indeed slow, mostly because it has to do a lot more work than the Java or Go compiler. However, in my experience Sbt's incremental compilation works well for small to medium sized projects. Beyond that we need a bigger hammer, and we're working on a parallel (and later, distributed) Scala compiler [1].
> Now trying Kotlin for the faster IDE and compilation speed, sadly the Kotlin developers think Option is only about nullable Types and don't embrace it.
Because Kotlin's native support for nullable types makes `Option` unnecessary.
There are many things you can express with ADTs that you can't express with nullable types, and once you have those, Option is simpler than extending the type system.
EDIT: Another thing you can do with Option is define generic abstractions that work on it and other types, like map/flatMap. This in turn means you can write generic functions over anything that can be flatMapped which work automatically for Options. (I don't know if there's anything equivalent in Kotlin though?)
For me, a nullable type expresses something semantically different than Option. Option is a higher-level concept expressing optionality (duh ;-)).
As a contrived example: it might make semantic sense to express Option[Option[A]] as a type; it does not make sense to have wrapped nullable types (except as a result of nested function calls).
Nullable types feel like a bugfix for null; Options feel like a concept for modeling business domains. Likewise, None expresses something different (not there) than null (which, e.g. in Java, usually conflates "not there" with "not initialized").
With Option it also makes sense to have flatMap, for-comprehensions, etc.
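To make that concrete, here is a small sketch using java.util.Optional as a stand-in for Scala's Option (the `parse` helper is made up for illustration). The nesting is representable and the flattening is explicit, which is exactly what a plain nullable type cannot express, since a null inside a null collapses:

```java
import java.util.Optional;

public class Main {
    // A computation that may fail: Optional models "not there" as a value.
    static Optional<Integer> parse(String s) {
        try {
            return Optional.of(Integer.parseInt(s));
        } catch (NumberFormatException e) {
            return Optional.empty();
        }
    }

    public static void main(String[] args) {
        // Optional<Optional<A>>: "the lookup succeeded but found nothing" is
        // distinguishable from "the lookup itself failed".
        Optional<Optional<Integer>> nested = Optional.of(parse("nope"));
        System.out.println(nested.isPresent());     // true: the outer step succeeded
        System.out.println(nested.flatMap(x -> x)); // Optional.empty: inner value absent

        // flatMap chains steps that may each fail, like a for-comprehension.
        Optional<Integer> sum = parse("40").flatMap(a -> parse("2").map(b -> a + b));
        System.out.println(sum); // Optional[42]
    }
}
```

With a nullable type, `nested` would just be `null` or `42`; the middle case disappears.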
I think you're only meant to use nullable types in Kotlin for exactly that purpose - expressing a value that may or may not be there (aside from the compatibility with Java libraries of course).
For things that just cannot be initialized directly in a constructor, you have more idiomatic constructs, such as the `lazy` property delegate, or in the worst case, the `lateinit` keyword (though at that point it may be better to rethink the design of your interfaces).
For indicating that an error occurred, you have exceptions.
I think that if you have to move to microservices just to avoid atrocious compilation times that means there's something horribly wrong in the language stack. Microservices should be used to solve architecture problems, not a workaround for compiler slowness.
This mentions weak IDE support as one of Scala's pain points, but Go has very much the same problem, and the language is vastly less complex.
The two best IDEs for Go right now, IMO, are VS Code and the EAP Gogland, but both of them are not yet on par with what you get with Java. VS Code has only support for very rudimentary refactoring (renames) and relies on a rather slow horde of external CLI linters (executed on save) to provide code analysis. Gogland has pretty nice refactoring, but its built-in code analysis is still too shallow.
Both will get better, but I can't take seriously the claim that you'd move to Go from another language due to "lack of a good IDE". If you want the best IDE, move to Java (or Kotlin, or C#) and never look back.
I explicitly mentioned Gogland (the Jetbrains IDE).
I'm not a fan of heavyweight editors (including emacs), and I'd gladly just use vim for Go if vim-go had refactoring support beyond the useless gorename.
Gofmt works nicely across the board, but godef needs your entire package to compile as far as I remember. Neither of them gives you refactoring support, and gorename doesn't give you much either.
You seem to think godef/gofmt is enough which is fine, but in this case Scala has the same level of support as Go.
Go can be written without a good IDE. I just use Sublime with the Go plugin and it works great. Also, Gogland gives a sub-par experience to many experienced Go users; e.g. it does not use `gofmt` but their own formatter, which they use for all their IDEs. Not sure about their refactoring tool, but `gorename` seems to me a fine refactoring tool.
Gogland's non-standard formatting is one of my pain points. But the standard go* tooling doesn't offer any proper refactoring.
Unless by refactoring you mean "just renaming my functions and variables and nothing else, and only when all my packages compile correctly, with 80% guaranteed success", then gorename is a good tool, I guess. YMMV.
Obligatory "did you try running `sbt ~compile` before you started complaining about the compile times" post.
It's amazing that the tools for faster turnaround times exist; the one thing the Scala community needs to do a better job of is clear, opinionated documentation.
Could you explain this a little bit? I am just getting into Scala, usually I use `sbt run` and `sbt test` while I am working, then `sbt dist` to package my production app. What does `sbt ~ compile` do ?
If you run `~` before any command, sbt watches the directory for any source changes, and on detecting source changes, redoes the command. So `sbt ~compile` will re-compile any new sources as soon as they are saved.
Or just use IntelliJ and you can reload classes while your program is running if you want. Sbt is a very powerful tool but it's not written with readability for new users in mind.
I would consider Scala a far better language from a safety, syntax and design perspective. Go is compelling due to the low GC pause times, and to some extent the community for some types of software. I would imagine switching between these to be quite rare. I would expect more movement from Scala to Swift or perhaps Rust.
Stroustrup is right: there are languages that people complain about, and languages that no one uses. Go is now firmly in the former camp, even if the complaints are always about generics and error handling.
I remember when software development was a gentleman's game and we looked forward to sharing a half hour together in somebody's office or around the railing or coffee pot to exchange ideas and news while the build finished.
Scala is good. It's not perfect, it's not appropriate for every situation, but it's worth learning and using.
Go has an anachronistic feel about it that is just, well, ugly.
If you come from an imperative programming background, as this team did, (typed) FP is a new way of thinking. It takes a long time to adjust to this new mindset, particularly if you've been programming in imperative languages for a long time. Go, on the other hand, is more of the same. It doesn't seem that anyone on the team had a background in typed FP, and it doesn't seem that they had any support from the organisation to make this shift. I'm not surprised it took them a long time and didn't end well.
Learning new things is hard and until universities and other training providers start teaching FP these kinds of stories will continue. (The alternative, to have companies investing in training and mentoring [plug: my company provides these services for Scala] doesn't appear to be likely en-masse, even though it would be radically cheaper than throwing away code.)
Wonder where they'll go once their codebase becomes crippled with interface{} types and nil-pointer checks, and they stop using channels and return to mutexes for performance reasons, or for more control.
Note: this sounds snarky, but it's an honest question I find myself asking after having tried many server-side techs and feeling limited with Go.
1) I have a totally different experience! When you are writing code which must have a certain degree of genericity, you end up having a ton of interface{} …
2) What? How is having null pointers a non-issue?! Empirically it causes fewer bugs than in Java or JavaScript, but it's still a really common source of bugs in Go.
3) I've never run into this kind of problem myself, since I don't use Go for performance-heavy code.
I've been on HN for years. Scala has been widely criticized with these exact same criticisms for years. These are all issues people genuinely have, even if you don't think they're valid.
It took some of us six months, including some after-hours MOOCs, to get relatively comfortable with Scala.
Huh. Yeah, sounds like Go is a really good choice for you guys! For some reason our team is able to onboard new engineers and have them be productive in less than a month...
No, there's a huge difference in quality of programmers. I've led a team that was productive (by which I mean shipping code that met business goals) within weeks of starting with scala. I've been on other teams where some of the guys still couldn't understand closures after 6 months.
> I've led a team that was productive (by which I mean shipping code that met business goals) within weeks of starting with scala.
Personally, I like to measure things like that by how fast a team/developer can get up to speed on maintaining an already existing non-trivial codebase in the language. Shipping greenfield projects is often easier than adding features to an existing one, particularly in languages like Scala.
> Shipping greenfield projects is often easier than adding features to an existing one, particularly in languages like Scala.
I don't agree. I work on an incredibly large Scala team and a powerful type system gives you a lot more confidence to make everything from minor bug fixes to sweeping refactors.
The price paid to learn Scala's complexity and power (which, I'll admit, is much higher than for other languages) pays off by lowering overall application complexity, reducing the need for a plethora of frameworks/libraries.
If you write Scala like you'd write a java/python/Go application, you'll have better type inference and a worse IDE, along with slower compile times. Not a great proposition. If you write Scala like a pure functional language that allows you to create powerful libraries and DSLs that make it possible to hack together very reliable applications, you'll be amazed.
I don't think I'd agree with that. I tend to learn languages by looking at what others have done. Doing something from scratch in a new language with unfamiliar libraries is really hard. Digging into someone else's codebase -- assuming they've done a good job at writing clean code and avoiding being "clever" -- to make small changes is way easier when you're first starting out.
Scala is an excellent language from a safety and design perspective. Its biggest flaws are the lack of a corporate sponsor and a higher barrier to entry. Map, flatMap and Option are all really great, but hard to grok at first.
Thanks, glad you like it! We at Lightbend (my employer) don't think of ourselves as very corporate, but we definitely sponsor Scala development. My team is hard at work on Scala 2.13 (well, except the part of it that's commenting on HN stories).
Wrt programming languages, going from Scala to Go is like going from an English professor to Honey Boo Boo. Go is basically a new PHP with lots of ftm features baked in. Scala, for all its flaws, stays true to its goal: it is a scalable language. Use as much of it as you are comfortable with, and keep growing. Here is, btw, one of the "coolest" features of Go done in Scala: http://storm-enroute.com/coroutines/docs/0.6/101/
I don't know, types help me think. I get lost real soon without them and the reason why I could never pick up any Lisps. In Scala/Haskell I can come up with a solution incrementally and types are the biggest reason why. I always have that feeling that I am missing out on the trumpeted awesomeness of Lisps but I can never justify using them. Maybe there is a personal angle to language selection but I just can't help but feel types are the future and a real advancement of technology.
Some points might be valid, but it's really hard not to dismiss the whole article when they write things like this:
> The funny part is that, because dependency hell is so ubiquitous in Scala-land (which includes Java-land), we ended up using some of the projects that we deemed too complex for our codebase (e.g scalaz) via transitive dependencies.
First, I've just checked, there are 237 dependencies in my classpath, and it never caused me any issue. Dependency management is a complex problem, but the JVM ecosystem does a pretty good job at it. Binary incompatibilities that are specific to Scala are pretty much non-existent today outside of very specific cases.
Secondly, why use stuff that you deem too complex and then complain that it is? Transitive dependencies are just that, transitive. I have Cats and Shapeless in my classpath and I haven't found yet the need to use them in my own code.
It is too bad though that e.g. IntelliJ cannot tell the difference, will offer you the transitive dependency's stuff in code completion, and now you are tied to it. Of course, the thing _you_ used went away in the newer version that the newer version of your direct dependency uses, and now you're spending time figuring out dependencies instead of writing code. Which, in my experience, happens pretty much all the time in Java/Scala land. (and I won't start about the sorry state of build tools for the JVM; seen them all, nothing is as nice as what I'm using now (Elixir/Mix)).
Nice write-up. I wonder if you guys tried using Scala as a functional programming language (_no_ vars, returns, partial functions, exceptions, mutable data structures, etc.), or just used it like in the stateful procedural world.
I think the problem here is that Scala doesn't make any choices for you. Now you're busy having to do a ton of code reviews to make sure that everyone stays within the chosen paradigm, stuff will of course slip through and bite you, and there you are with one big nice mess. Plus, it's too simple to just pull in a Java library which doesn't mesh well paradigm-wise with what you have (or a Scala library for that matter). I've done a ton of Scala and I've been left with the same conclusion as I had waaaay back with C++ - too many potential solutions, too many pitfalls and ways to make a mess. I guess I'm a person that wants to make the language make the paradigm choice for me and then I'll happily stick with that. One reason I like a language like Elixir so much - the language made the choices, it matches the problem set I work on pretty well, all the libraries look the same, life is simple, I can just write code (instead of trying to understand a moderately complex build.sbt file which basically comes with a paradigm of its own).
Worked with a principal engineer once who used language as a litmus test. If you could not understand functional programming in something like Lisp or the MLs, he'd just know not to get you on his team.
Obviously, most software could be written by monkeys and only needs to produce trivial functionality; in those cases, please switch to Go, or at least stick to Java or C#. I say that because, in the hands of someone who doesn't know better, more expressive languages can cause havoc and turn really messy. Especially hybrid languages like Scala.
Leading teams to victory and accomplishing business objectives is an even more important litmus test, in my book. But different strokes for different folks I guess!
I agree. But personally I'd be concerned about a team lead who thinks the make or break thing is whether a candidate engineer loves fondling their monads all day.
Ya, it's a little draconian, but in practice it was more nuanced. If you just didn't know those things, it was fine, but if you couldn't learn and eventually understand them...
"..it felt quite empowering to have the confidence that, supported by type-checking and a few well-thought-out tests, my code was doing what it was meant to. "
Does this mean that, regarding what the software is "meant to do", type checking covers most test cases, and only a few additional ones are needed to ensure it does what it's meant to do?
That's not what I meant to emphasise, but kind of. I was contrasting the Scala programming experience to this: https://eev.ee/blog/2012/04/09/php-a-fractal-of-bad-design/ which seems to be pretty much the point expressed by the Coursera people in the blogpost I quoted from them.
"Part of my frustration with Scala (and Java) was the feeling that I was never able to get the full context on a given problem domain, due to its complexity."
I see. So smaller image size and lower memory usage were the critical factors informing the decision to use Go and not Java?
If we're talking server provisioning costs ($), I think factoring in the costs of the stack switch, code rewrites and salaries should be on the table too!
p.s. Have been writing Go code since the day it was released. My comment has nothing to do with their decision du jour.
Really well written. We have a similar story, where we rewrote parts of our Java code in Go and the build time went from 15m to 3.6m! From a coffee break to reading a short Medium article.
Sure, never denied. But we are breaking apart that Java code part by part. The Docker build size is also very small, as mentioned in the article. Our app involves a lot of bandwidth, so we're good with smaller containers.
More importantly, how much time did it take to rewrite the code that took 15m to compile? Are the machines working for the man, or the man for the machines? ;)
So this is the 3rd language in your code base: first there was Java, then there was Scala, and now Go. I do hope all the Scala code is gone by now! I assume this decision and investment was extremely well supported, but that is not obvious from your blog post and supporting comments here.
So 12 months of effort: at Bay Area salaries and costs, that is on the order of 300,000 USD. Your parent company had 6.6 million NZD profit last financial year. So for investing around 5% of the total profit of the larger group, your major benefit is reduced compile times, plus some other second-system benefits.
That is a rather expensive decision to have taken, and I think if you had been forced to do a proper ROI investigation beforehand, it would not have been approved. In my experience, when doing these calculations and getting the numbers down, changing languages is never the financially wise choice; instead, actually fix the pain points that people have with the language, and most often with the actual project setup. It is only when a language change is incremental, adds a key feature, or is a market requirement (e.g. for deploying on certain devices) that a new language makes financial sense.
I am not judging your Go-over-Scala decision on language merits, but I am judging your management layer for doing this. Especially as it sounds, from other posts, like Go is now one more language to support in your company, increasing costs outside of your team: cross-team training opportunities, support, monitoring, etc.
Of course sometimes changing language is like changing from speaking French to Italian in the office and hoping that office politics will disappear... Been there done that, it did not work out as hoped by the developers.
Sorry I was confused between your experience and the opening post which was moving from Scala to GO. So I assumed you were working for Movio and used their numbers. Therefore my post does not make any sense :)
Then it was a Java-to-Go move. I wonder where your compiles take the most time. For our work, building the jar files is the most expensive part; javac takes about 20 seconds all-in for about 3000 files on my 2009 MacBook.
Regarding LinkedIn, they are not moving away from Scala AFAIK. And Yammer was a long time ago. Note that Scala has improved quite a bit since then, especially with the release of 2.12.0 (Java 8 support).
I don't understand why, but there seems to be a lot of negativity aimed towards Scala. It's a solid language backed by the JVM, has great Java interop, and beautifully combines OOP and FP in a way I've never seen in any other language. Also, ScalaJS[1] is absolutely amazing.
Yet every few weeks, you get a blog post detailing why Scala is a failure and how it will be dead in a few years. Seriously, what gives?
I don't care much for language battles, but maybe I can shed some light on this.
The former VP of Platform Engineering at Twitter said in 2015, "What I would have done differently four years ago is use Java and not used Scala as part of this rewrite. [...] it would take an engineer two months before they're fully productive and writing Scala code." In other words, he expressed regret over the choice of Scala.
LinkedIn's SVP of Eng Kevin Scott said, "We are not getting rid of Scala at LinkedIn. We've recently made the decision to minimize our dependence on Scala in our next generation front end infrastructure which is rolling out this year. We've also made a decision to focus our development infrastructure efforts on Java 8, Javascript, Objective-C, Swift, C++, and Python given the nature of things that we are building right now and will be building in the foreseeable future. That said, we have a ton of Scala code running in production, and will continue to provide basic support for Scala and those Scala systems for the foreseeable future."
And Coda's Yammer letter is around for those to Google.
Companies don't ditch languages that they have a huge investment in. It's just not worth it when you reach a certain level. It's not just the code to run your services, but the huge investment in performance, tooling, and monitoring - not to mention the bugs that come up from code that's just a little different. Twitter is past the scale where it can ditch a language. So is LinkedIn. But they can regret choices or decide to concentrate future development in a different direction.
I think that Java 8 has dulled some people's enthusiasm for Scala. Java 8 comes with a lot of nice features that people really missed in Java, and some may have chosen Scala precisely because it did have them.
But I think more generally, there's been a movement away from being enamored with features, cleverness, and terse syntax in a language. Scala is a language of features, cleverness, and terse syntax. I was at a presentation by Rob Pike (one of the Go folks) where he argued that there are many things you can add to a language that feel clever and satisfying to write but obscure what's actually going on, and that ultimately the things that are annoying, that are time sinks, and that cause problems aren't that you had to use a loop rather than something more clever. It was a long time ago, so this is potentially more what I took away from the presentation than Pike's actual sentiment.
If we look at Go, Go is boring as hell. I mean, it has one cool thing in it: goroutines (with "cool" defined as something generally not found in all programming languages). It doesn't even have a lot of the cool things Java has. Java has annotations, inheritance, generics, a cool lambda syntax, "final" immutable references, advanced codegen... Go doesn't even have a way for me to declare an immutable variable.

But it's so easy to pick up, because there's basically nothing unfamiliar to someone who has experience with a mainstream, imperative language. Really, the syntax might be a bit different, and it will always take a bit to adjust to a new language, but goroutines are really the only feature of the language that should be "new" to users (maybe multiple return values, but that's stretching it). It generally emphasizes ease of understanding and debugging over terseness of code. People seem to be enjoying that (at least some people).

Part of it might just be that when a new philosophy comes onto the block, a lot more is written about it than gets written about old philosophies. For example, DHH just argued that Rails' all-in-one package is still a huge value-add today, even though it doesn't get written about as much as it did a decade ago (https://www.quora.com/What-makes-Rails-a-framework-worth-lea...). It's possible that Scala's value-add is as good as it once was, but the people who found that value-add useful have long since stopped blogging about it.
I haven't used Scala that much. I like that it has a repl, I like immutability (which should be nice with Shenandoah GC), I like the fact that I can make a basic POJO (case class) without writing a novel of private fields, getters, setters, hash code, and equals. Scala definitely has some great parts that I wish Java had. Scala also has some hell that can make it harder to reason about (gratuitous operator overloading, implicits http://yz.mit.edu/wp/true-scala-complexity/).
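To make the POJO point concrete, here is a minimal sketch (Point is a hypothetical example, not from the article): a one-line case class gives you a constructor, accessors, structural equality, hashCode, toString, and copy, all of which a comparable Java class would make you write or generate by hand.

```scala
// One line replaces the constructor, getters, equals, hashCode,
// and toString that the equivalent Java class would need.
case class Point(x: Int, y: Int)

object CaseClassDemo {
  def main(args: Array[String]): Unit = {
    val p = Point(1, 2)
    println(p == Point(1, 2))  // structural equality: prints true
    println(p.copy(y = 3))     // non-destructive update: prints Point(1,3)
  }
}
```

Libraries like Immutables or Lombok approximate this in Java via codegen, but the language itself still has no equivalent.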
Scala also came onto the scene at a time when Java didn't look so hot. Java 5/6 era Java (2004 & 2006) just didn't support the cool FP stuff that Scala did. Java 7 didn't come out until 2011 and still didn't have lambdas; it was 2014 before Java got a lambda. So Scala was this new language that ran on the JVM that had features! But maybe people didn't want all the features, and really only wanted a few of them like lambdas, streams, and the ability to make simple POJOs/case classes (which is still missing in Java, though things like http://immutables.github.io/ can help). I'm sure that there are many other reasons to use Scala. I'm not trying to argue anyone away from a language they enjoy using. The point I'm trying to make is that many people might have chosen Scala for some of these simpler features that now mostly exist in Java. Other people might have been enamored with how productive Scala made them at first and later decided that they were creating things that were harder to debug or for others to read and understand. Still others might not have realized that everything is terrible and there's no amazing new technology that's going to make them 100x happier than they used to be once they actually write real programs. Don't underestimate that last bit. Scala's been around and has plenty of people who have used it and not created anything in a shorter amount of time or more reliably than they had in Java, Python, or whatnot... but don't yet realize that everything is terrible. I'm not saying that languages don't matter. I think they do. Still, it's hard to find something that doesn't have huge terrible portions that will drive you nuts on occasion.
Scala isn't as shiny and new as it once was. A new language is always awesome before you have to debug what you write in it. Java 8 has implemented a lot of what people thought was missing from Java. People have dulled on "magic" (really just harder-to-follow code) whose workings can be hard to discover, and have come to prefer easy-to-follow code. That doesn't mean Scala is terrible or anything, but its advantage over Java is probably smaller for people who want some basic FP, and the zeitgeist seems to be moving against languages that are more implicit or dynamic (regardless of whether that's happening in terms of LOC).
I think you hit on part of it, which is that for a while Scala sold itself as a better Java, and people ate that up, until they found out it wasn't.
Scala is not a better Java; in fact, it's a worse Java (with better type inference). It has a worse IDE situation, compile times are slower, and it takes much longer to master.
Scala has been and continues to be my favorite language though, and I would absolutely use it for new projects along with building a company off of it if given the opportunity.
Scala's strength is not as a better Java, but as a full-fledged functional programming language that allows you to write pure, easily testable, easily parallelizable, robust software.
Rob Pike's statement is a reflection of the terrible mistake he makes by conflating application complexity with statement complexity. Any given line of Scala can absolutely be more terse and difficult to parse than a line of Go or C or Java, but that is often inversely proportional to the complexity of the entire application. Powerful languages mean less need for frameworks and less reinventing of the same thing over and over. See the long-running Go generics debate for a microcosm of this idea.
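A toy sketch of that trade-off (the orders list is invented for illustration): the functional one-liner is denser per line than the loop, but it eliminates the mutable bookkeeping where bugs tend to hide as the application grows.

```scala
object TerseVsLoop {
  def main(args: Array[String]): Unit = {
    val orders = List(3, 5, 8, 13)

    // Terse: a single expression with no mutable state to track.
    val total = orders.filter(_ > 4).map(_ * 2).sum

    // Imperative equivalent: each line is simpler to parse on its own,
    // but correctness now depends on the accumulator's bookkeeping.
    var acc = 0
    for (n <- orders) if (n > 4) acc += n * 2

    println(total == acc) // prints true (both are 52)
  }
}
```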
All good points. Java 8 has captured a lot of things that people were missing, and new languages have sprung up that try to strike a good balance between terseness and complexity, with Kotlin currently being the new hotness on the JVM. Scala was in the right place at the right time, but its features lack orthogonality and coherence.
I get this feeling when looking at the OO and functional styles continuously clashing. Implicit classes make some reasoning about functionality non-local. There's a horrendous amount of complexity in the OO side: case classes, traits, mixins, and a complex type hierarchy to boot.
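As a small illustration of that non-local reasoning (RichWords is a made-up example): the extension method below compiles at the call site only because of an import, so reading the call site alone doesn't tell you where wordCount comes from.

```scala
object StringSyntax {
  // Implicit class: adds wordCount to String "at a distance".
  implicit class RichWords(private val s: String) extends AnyVal {
    def wordCount: Int = s.split("\\s+").count(_.nonEmpty)
  }
}

object ImplicitDemo {
  // Remove this import and the call below no longer compiles.
  import StringSyntax._

  def main(args: Array[String]): Unit = {
    println("the quick brown fox".wordCount) // prints 4
  }
}
```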
Most languages just don't have that many features; you could take different, non-overlapping subsets of Scala's features and each would have an idiomatic style of its own and still be good enough to solve most problems tersely.
Thing is, a lot of these nice things about Scala you can now get with Kotlin - with better IDE support, better readability, better compile times, and more seamless interoperability with existing Java libraries.
Scala got a boost when it was just getting started by being a "Better Java". Now it's less able to stand on that value proposition, and it's losing supporters.
Thank you for taking the time to type up such a detailed response. I read through it more than once.
I agree with most of your points, but I still believe that Java != Scala, at least not yet. I think Scala still has a lot to offer that differentiates it from plain Java.
Having used another language with tooling problems & slow compile speeds, I can understand why that alone would make you not want to use it anymore.
After a certain point, all of the advantages of the ecosystem start going away if compile/indexing speeds are not good and tooling doesn't work.
Golang was designed from the start to make tooling and fast compile speeds first-class citizens. It has a lot of decisions that make sense when you're working with large teams.
Compile times are a problem, yes, but tooling is not an issue. sbt is a solid build tool in my opinion.
Regarding the compiler, the Scala team has that as their number 1 priority right now. There is major work being done on building the compiler from the ground up [1]. Also, there is scala-native, which aims to provide AOT compilation for Scala code [2].
Scala vs. Go is an interesting comparison, but I think that they are each targeting different applications. There definitely is some overlap though. Go tooling is pretty good, but I don't see the hype. Maybe it becomes more clear if you're working on a large Go codebase? I don't know.
I come from the Swift world, where a lot of their Scala issues resonate with me. The article also mentioned tooling being an issue for them.
They have a lot of decisions that make a lot of sense in a large-codebase context, which Google has plenty of. Stuff like KISS, fast compile speeds, standard formatting & build tooling, and so on.
I haven't worked with either language, so take what I say with a grain of salt.
From the start they decided to ship with a standard formatter/linter, for example, and a standard cross-platform build system optimized for build speed from the start. I think a better phrase would have been 'a top priority'.
That is not a lot of tooling, actually. I am thinking about IDE integration, static analysis, code generation, package management. What are aspects of the tooling experience I am missing?
> As a whole, Movio hosts a much broader and diverse set of opinions, so the “we” in this post accounts for Movio Cinema’s Red Squad only. Scala remains the primary language for some Squads at Movio.
I understand what they mean when they write:
I think the first time I appreciated the positive aspects of having a strong type system was with Scala. Personally, coming from a myriad of PHP silent errors and whimsical behavior, it felt quite empowering to have the confidence that, supported by type-checking and a few well-thought-out tests, my code was doing what it was meant to.
There are times when I appreciate the strict type-checking that happens in Java. I do get what they mean. But there are also a lot of times when I hate strict type-checking, in particular when dealing with anything outside the control of my code: whimsical, changing 3rd party APIs that I have to consume for some business reason; 1st party APIs that feel like 3rd party APIs because they are developed by another team within the same company; or some old API developed in-house 6 years ago whose broken aspects we cannot fix for some reason. Because of this, I have become a proponent of gradual typing. If I am facing a problem that I have never faced before, I like to start off without any types in my code, and then, as I understand the problem more, add in more contract enforcement. This is what I attempted to communicate in my essay "How ignorant am I, and how do I formally specify that in my code?" [1]
I think everyone who works with Clojure sometimes misses strict type-checking. Because of this, there have been several interesting hybrid approaches that attempt to offer the best of both worlds. There is Typed Clojure for those who want gradual typing, and more recently there is Spec. Given what I've written, you might think I am a huge fan of Typed Clojure, but I've actually never used it for anything serious. The annotations are a little bit heavy. I might use it in the future, but for now, I am most excited about Spec, which introduces some new ideas that are exciting for Clojure and that I think will eventually influence other languages as well.
Do watch the video "Agility & Robustness: Clojure spec" by Stuart Halloway. [2]
I also sort of understand what they mean when they write this:
No map, no flatMap, no fold, no generics, no inheritance… Do we miss them?
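(For anyone who hasn't leaned on that trio, here is a toy sketch of what the quote is listing, using an invented xs list: map transforms each element, flatMap transforms and flattens, fold reduces to a single value.)

```scala
object HigherOrderDemo {
  def main(args: Array[String]): Unit = {
    val xs = List(1, 2, 3)
    println(xs.map(_ * 10))               // prints List(10, 20, 30)
    println(xs.flatMap(n => List(n, -n))) // prints List(1, -1, 2, -2, 3, -3)
    println(xs.fold(0)(_ + _))            // prints 6
  }
}
```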
There are times when we all crave simple code. Many times I have had to re-write someone else's code, and this can be a very painful experience. There are many ways that other programmers (everyone who is not us, and who doesn't do things exactly like we do) can go wrong, from style issues such as bad variable names to deeper coding issues such as overuse of design patterns or using complex algorithms when a simple one would do. I get that.
All the same, I want to be productive. And to be productive in 2017 means relying on other people's code. In particular, it means being able to rely on other people's code reliably -- using other people's code should not be a painful experience. Therefore, for me, in 2017, one of the most important issues in programming is composability. How easy is it for me to compose your code with my code? That is a complex issue, but in general, languages that allow high levels of metaprogramming allow high levels of composability. Ruby, JavaScript, and Clojure all do well in this regard, though Ruby and JavaScript both have some gotchas that I'd rather avoid. In all 3 languages, I find myself relying on lots of 3rd party libraries. I use mountains of other people's code. Most of the time, this is fairly painless. But there are some occasionally painful situations. With Ruby I run the risk that someone's monkeypatching will sabotage my work in ways so mysterious that it can take me a week to find the problem. And JavaScript sometimes has the same problem when 3rd parties add things to a prototype, perhaps using a name that I am also using. I have so far had an almost miraculous time using Clojure libraries without facing any problems from them. It's this issue of composability that makes me wary of Go. While I sometimes crave a language that simple, I can't bring myself to give up so many of modern languages' best features.
[1] http://www.smashcompany.com/technology/how-ignorant-am-i-and...
[2] https://www.youtube.com/watch?v=VNTQ-M_uSo8