No mention seems to be made of how long Spencer was at Google.
As for the "pathological love of Java", Spencer has a distorted view. This will vary from project to project. I'd say C++ is more pervasive, although Java may well dominate application (rather than infrastructure) code.
Also, there are lots of people who use Python, but ultimately its production use is limited.
A common theme here seems to be that as an engineer you are discouraged from or disallowed from doing things that are too "clever". This is something I approve of. Other people need to maintain your code, after all.
For example, the abuse of automatic semicolon insertion that the Twitter devs behind Bootstrap seem to love (as some kind of "see how smart I am" display) would never survive at Google (our style guide expressly forbids omitting semicolons).
Anyway, sorry it didn't work out. Good luck, Spencer.
GWT takes the (very large) pool of capable Java programmers and magically enables them to become capable web programmers without mastering cross-browser subtleties. It's all about compartmentalization. For any/all its faults, that's an astoundingly immense value.
It depends what you mean by "capable". It's more accurate to say it takes a large pool of Java programmers and lets them write slow and bloated UIs with canned widgets that are subtly (or not so subtly) inappropriate for the task at hand.
Google should be producing best-of-breed UIs, not ones that are easy for the unskilled to write. It's not like Google can't afford to hire the best.
A main reason for the slowness is that GWT obscures the boundary between the client and the server. People end up writing things like:
for (Item x : hugeList) {
    doSynchronousCall(x);  // blocks on a full server round trip, every iteration
}
without realizing it. And then people test it on fast internal network connections, and they don't realize what a poor experience results for average users and mobile users.
Another problem with GWT is that the JavaScript world has moved extremely fast in the 4 or 5 years since GWT was developed -- and most programmers don't know how to interface GWT with native JS (and that would defeat the "purity" anyway). So they're locked out of that functionality.
No matter how big the GWT team is, it can't keep up with the rest of the web advancing the state of the art in JavaScript.
I've also heard from people working on the Wave codebase that GWT was a big part of the reason they were locked into a particular design and couldn't iterate. For some reason Java just encourages people to dump out mountains of code. Maybe you can get away with that on the server, but when generating JS it just leads to UIs that crash the browser.
Maybe you shouldn't comment on GWT if you don't know anything about it. Pretty much everything you state about GWT is wrong.
1) GWT does not emulate synchronous APIs. That would be incredibly expensive to do, requiring the implementation of continuation emulation or CPS transformation of the entire program.
2) GWT does not advocate "shielding" the developer from DOM or CSS. You are free to use the widgets, or not, just as you are free to use JS libraries with widgets, or not. The purpose of GWT isn't to hide the browser.
3) Most GWT programmers know how to interface with JS. I have not encountered any non-trivial GWT app that does not contain at least some portion that makes calls to JS (see the sketch after this list).
4) On Javascript evolution: sorry, Javascript has moved glacially. What has moved fast is the HTML5 API bindings. When ES6 arrives and GWT doesn't support it, then you'll have a point.
5) GWT is open source. It doesn't matter how big the GWT team is, it only matters how big the community is.
6) GWT UIs don't "crash the browser". Google derives the majority of its revenue from AdWords and AdSense, which are GWT, and if they were crashing customers' browsers like you say, I'm pretty sure Google would be in trouble.
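To illustrate point 3: a minimal JSNI binding looks something like this (the method name here is made up, but the /*-{ }-*/ body and the $wnd alias are GWT's):

// A Java method whose body is plain JavaScript, compiled in with the app.
public static native void logToConsole(String msg) /*-{
  $wnd.console.log(msg);  // $wnd is GWT's reference to the host page's window
}-*/;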
GWT is a compiler. You can write bad code, or you can write good code. In 2009 I ported jQuery to GWT. The resulting code was smaller than jQuery, and faster than all other competing libraries at the time. See the result here: http://www.youtube.com/watch?v=sl5em1UPuoI
I don't have anything against JS, and I don't think GWT is the right tool for every project, but I don't think you should respond to threads like this and make what appear to be authoritative statements which are obviously and completely false (synchronous calls).
As mentioned below, the point about synchronous calls was made based on seeing 30+ second latencies in more than one GWT app and opening them up in firebug, and seeing a crazy number of roundtrips (with many of them serialized). With async APIs people can somehow still structure their applications so that logically many full round trips are needed to render the page.
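To sketch the shape of what I kept seeing (the service and types here are hypothetical, though AsyncCallback itself is GWT's):

// Each item triggers its own RPC, so rendering needs one round trip per
// item instead of one batched call -- invisible on a fast internal network,
// multi-second for average or mobile users.
service.getItemIds(new AsyncCallback<List<String>>() {
  public void onFailure(Throwable caught) { Window.alert("load failed"); }
  public void onSuccess(List<String> ids) {
    for (String id : ids) {
      service.getItemDetails(id, new AsyncCallback<ItemDetails>() {
        public void onFailure(Throwable caught) { Window.alert("load failed"); }
        public void onSuccess(ItemDetails details) { render(details); }
      });
    }
  }
});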
I'm not necessarily saying that's GWT's "fault" -- the claim is that the developers writing GWT don't understand what they're writing. I said essentially what I said here to a tech lead of a GWT app and he's like... that's EXACTLY the problem we're having. This happened to be an app I've never even used, but he was shaking his head in agreement.
I think GWT was written by skilled engineers who had written tons of JS and got sick of the repetitiveness and the primitiveness of it in 2007 or whenever. But the claim of the OP: "GWT takes the (very large) pool of capable Java programmers and magically enables them to become capable web programmers" is not true. Empirically, people who've never written JS are writing GWT, and they are writing terrible web apps.
I guess you can say that tools create apps with a "smell". PHP creates web apps with injection security holes. C creates apps riddled with dangerous and costly buffer overflows (or used to before crazy amounts of compiler/library work). I'll take a dig at my favorite and say Python creates apps that dump stack traces on invalid input and don't handle signals correctly.
GWT's smell is creating bloated and slow apps with bad UIs. I was pretty surprised at the number of upvotes I received, so I'm not the only one that smells it. Like C and others, it's possible that people will derive so much value from the tool that they eventually learn how to overcome the pitfalls.
I smell selection bias. Have you never visited HuffingtonPost, Gizmodo, or PandoDaily and looked at the network tab? Javascript does not automatically make people produce fast and snappy sites. That is based on experience, regardless of language. There are tons of websites that make a boatload of script include requests and image requests because the developers haven't learned about UglifyJS or Closure Compiler yet, or don't know about CSS spriting. Take a look at flights.google.com: it is a GWT app of non-trivial functionality that loads faster than most comparable sites (<2 secs) and makes far fewer requests. Another good one is the SpeedTracer app for Chrome (it runs as an extension). It has a very very nice UI, better than Firefox and WebKit WebInspector, and it's a GWT app.
Anyway, why do you think Google does all of their hand-coded JS apps with Closure and the Closure Compiler? It's not because JS 'out of the box' is conducive to producing best-of-breed apps. Web programming requires a large amount of on-the-job acquired knowledge. How many people are aware of CSS selector performance, or the performance effect of reading element.offsetWidth?
GWT doesn't prevent you from shooting yourself in the foot, just like any other language. Producing optimal apps in any language requires experience in that language. Knowing how to arrange non-blocking behavior in script inclusion, or batching requests, is something that comes with experience. I wish I could produce a boilerplate framework that could make amateurs produce absolutely optimal apps out of the box, but that would trade off a lot of freedom.
However, GWT does offer tools to reduce your HTTP requests, and it had been doing this for years before they became commonplace in JS. For example, GWT has had automatic CSS spriting and image optimization since 2007. It's been doing CSS optimization nearly as long. GWT 'perfect caching' has been in from the beginning, which guarantees that most of the time, the app is loaded via a single HTTP request to a small 2k bootstrap file. In fact, it's possible to load your initial page: html, js, css, images in just 1-2 requests if you desire. Obviously, there are tradeoffs there too, that are platform and app-dependent.
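To give a concrete flavor (the resource names here are made up; ClientBundle is the current face of this machinery):

import com.google.gwt.core.client.GWT;
import com.google.gwt.resources.client.ClientBundle;
import com.google.gwt.resources.client.ClientBundle.Source;
import com.google.gwt.resources.client.CssResource;
import com.google.gwt.resources.client.ImageResource;

// Resources declared here are sprited/inlined at compile time, collapsing
// many image and stylesheet requests into a handful.
public interface AppResources extends ClientBundle {
  AppResources INSTANCE = GWT.create(AppResources.class);

  @Source("logo.png")
  ImageResource logo();

  @Source("app.css")
  CssResource css();
}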
There certainly are some APIs in the GWT SDK that can produce bloat if you use them, and that is wholly dependent on your preference. The GWT RPC system, for example, is great for internal apps, because it makes server communication trivial, but GWT also includes regular JSON/JSONP/XHR support. You can choose to use Widgets, or you can build your UI with HTML templates and CSS, just like you might do with Handlebars in JS.
I guess my point is this: I don't deny that an amateur Java programmer could take GWT and unknowingly compile a huge amount of junk into their app or make an insane number of HTTP requests. I just don't think producing apps with terrible startup latency is GWT specific, and I've seen enough websites in JS that are absolutely terrible to know that the way you get an optimal UI is by hiring good engineers with web experience, not by language choice.
And on that note, it's quite easy to get upvotes IMHO when you start a language war, because people are so opinionated on one side or another about it. My own personal opinion is that you should use the tools you feel most productive in, period, and if that's JS, or GWT, or CoffeeScript, or Objective-J, then more power to you.
> Javascript does not automatically make people produce fast and snappy sites.
He never claimed otherwise. His point seemed to be that if someone just picks up GWT and whips up an application, it's likely to exhibit some (severely) suboptimal behaviour here and there.
I'm going to agree. I have not written any GWT code, but I have heavily read a certain internal codebase that's a GWT app, and most of the time reading has been spent thinking, "wow, that's awesome". I think GWT's abstractions are excellent; the Java reads extremely well and generates code that makes interacting with HTTP APIs transparent. (Specifically, code reads like you are writing a command-line UI with in-memory dependencies, but is actually an async web app that gets all its data from various APIs. I think that's cool.)
Like all abstraction layers, this could be good or bad. But there are definitely GWT features I wish I had about 5 years ago when I was writing web apps for a living :)
> GWT UIs don't "crash the browser". Google derives the majority of its revenue from AdWords and AdSense, which are GWT, and if they were crashing customer's browsers like you say, I'm pretty sure Google would be in trouble.
I use Adwords quite a bit, and it is buggy. I probably have to refresh every 10 minutes or so to get the buttons to work again. It is an absolutely massive application with more features than all of Google Docs combined, so I am not surprised. The bugs can be frustrating, but it is not like there is an alternative place to buy ads on Google's search pages. Google will be fine no matter how many bugs are in Adwords.
Adsense is quite simple, and works well.
That being said, Adwords has been getting much better.
OK I guess I'm misinformed -- all I'm saying is that I've opened up a bunch of GWT applications in firebug / chrome developer tools.
And I see lots of multi-second latencies and dozens or even hundreds (!) of round trips to the server. Which makes it clear to me that the developers don't understand what they are actually writing.
I'll admit to being wrong on that one point, but I was upvoted like hell so I guess other people have the same impression of GWT that I do.
Why is this being downvoted? Calls to the server to update state do not have to be synchronous and it's fairly obvious that this is a mistake the parent poster is making in his obnoxious claims against GWT.
Having used GWT for a large web application, it is hard to say much positive about building web client applications in Java. The GWT compiler technology and debugger support are great... but the flaws and rigidity of the Java language are so deep that it quickly becomes frustrating. Yes, you get the syntax and semantics of Java, which may be a benefit for certain developers to quickly produce web apps, but the abstraction is leaky and the nature of the language is unsuited for use in the browser (or GUI programming in general).
I think the results speak for themselves - anyone care to name a GWT application from google with a world class GUI?
Flights is a bit thin; hotelfinder is better, but on my iPad here it's got that typical clunky Google UI feel - and to be fair they aren't a design- or GUI-strong company - but it's clear they are making steps to address it.
Good point - Angry Birds is an excellent counter-example and a fair use case for GWT. It's not what I'd call a GUI app - I was thinking more of forms and document interaction (Word, Excel, Illustrator, Photoshop).
I see a strong use case for Java on GWT, emscripten et al for apps like games that are a universe to themselves, using a very narrow surface area (canvas, ogl) of the browser application interface.
Where these abstractions fall down for me is the friction at every interface interchange. In Java's case that's callback friction (brevity matters to me), the heavy burden of types on variables versus values, a verbose and inarticulate typing system, and, like C/C++ and many Algol derivatives, a complete paucity of expressive literals for expressing all data types that can be modelled by the type system.
Yeah, I read his list of cons mostly thinking "gee, this sounds like an environment I'd enjoy". What look like cons when you're younger often end up being pros once you've been around the programming block a few times and inherited code that should be featured on the Daily WTF.
I've been around the programming block a few times myself and inherited some perfectly horrible code, and his list of cons makes me shudder. But people differ. I've (purposefully) worked at smaller companies where I had more influence over the technical direction. I think a good amount of his cons list come from the size of Google, and all that large size brings with it. Many of those things are probably needed in a large enough company, to prevent the very things you seem to think they'd bring (Daily WTF time!) But in a small company with decent people, those rules and attitudes would be overly restrictive.
> A common theme here seems to be that as an engineer you are discouraged from or disallowed from doing things that are too "clever". This is something I approve of. Other people need to maintain your code, after all.
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." – Brian W. Kernighan
Of course, if you can debug it, it means you could be solving a harder problem ;). And harder problems are more fun, even if you can't quite get the solution to work.
That's an awful game plan. If you can't debug it, chances are you aren't going to be solving it, unless you reliably write bug-free code on the first try.
Apologies for judging people on their resume, but that linkedin profile shows a series of jobs, each of them lasting less than 1 year. That's a bit of a concern. Together with a post that I subliminally read as "if you don't adopt functional programming you are a loser", I'm tempted to look for a more grounded opinion on the matter of Google's technical choices.
That being said, if you understand functional programming, you have my respect.
A trend I've seen is young & extremely smart people making posts saying "you are doing it wrong, you should use lisp or $whatever_language_they_like".
What I haven't seen is a lot of posts from these same people (have seen them from others) where they are talking about maintaining code, bringing new people in who can fix bugs, add features.
I think the natural progression for smart people is
- wow, I'm dumb, I'm so dumb, I need to learn.
- a few weeks/months/years of learning
- holy crap, everyone is doing it wrong! They need to use $New_Whiz_Bang! It's so cool, it's objects or functional or has $feature_I_really_like, why aren't they using it?
- time passes. A lot of time. Maybe a decade.
- Oh, my, legacy code sucks. I don't understand what $programmer did 5 years ago. Holy crap, $programmer is me!
- Wow. Customers suck. They want this code to work and $cool_guy embedded a lisp interpreter in the middle of our database and wrote a bunch of cool lisp that I don't understand. And the customer wants it fixed yesterday. And my kids and wife hate me because I work too much.
- I really wish everyone would program their code like it was intended to be understood by an idiot.
My take? Writing code is easy. Fixing it is hard. It's easier if you weren't trying to be clever when you wrote it.
I can't comment on Lisp (except to say that Scheme is awesome), but I think you are very far from the mark on functional programmers in general.
In particular, Haskellers worry about maintainability far more than normal programmers. One of the main advantages of functional programming is that it makes reasoning about the code--including ensuring correctness--much easier.
Haskell enforces referential transparency, limits IO interactions with the type system, allows you to encode some fairly complex invariants in types and, despite all this, has some of the best testing software around in QuickCheck (which goes beyond unit tests).
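To give a flavor of QuickCheck (a toy property, not from any real codebase): you state a law, and the library generates and shrinks the test cases itself:

import Test.QuickCheck

-- Holds for all generated lists, not just a few hand-picked examples.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice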
In my experience, Haskell programs tend to have smaller, less coupled parts with better defined invariants. You don't have to worry about a large class of errors right off the bat: you can't get null pointer exceptions, functions you call can't have any unexpected side effects, you won't get type errors at runtime and so on.
If anything, Haskellers worry about correctness too much. Happily, this is offset a bit by the language which happens to be very expressive and productive. You can also get very good performance from high-level code and even unoptimized code tends to be decently fast. Some people also find it hard to learn, but that's simply a cultural issue. Somebody already familiar with functional programming has much less difficulty picking it up, but even without much functional experience the basics shouldn't take too long.
> In particular, Haskellers worry about maintainability far more than normal programmers. One of the main advantages of functional programming is that it makes reasoning about the code--including ensuring correctness--much easier.
This was what finally convinced me to become proficient in functional programming. I'm constantly amazed at how stuff just works.
So you want the code for your hands-free car to be written in Haskell?
I can just hear the answers: "Why, yes! Because I can prove that it is correct!"
Now consider that you are going to get bored at GM and move on. Someone a lot less talented than you is going to go in and modify your perfect Haskell code. They don't know Haskell, they aren't you, but the code needs to be enhanced.
Now how comfortable are you in that car? (I know what you will say, but as an experienced manager, I would not be comfortable in that car.)
I'm not sure I would want to be in a car with code written in any language by somebody who couldn't learn Haskell (or any other language, for that matter) in a couple of weeks, at least well enough not to mess something up without noticing.
That said, if I had to be in a car with code written by somebody who didn't know a language, I would much rather that language be Haskell than C (to pick an extreme example; replace C with most any other language and it would be true to a smaller degree). It's much harder to sneak a bug through Haskell than pretty much any other language, and it's easier to have more thorough testing. Somebody new is very likely going to have trouble getting code to compile at all, but they would also have trouble adding subtle (or even unsubtle) bugs to it.
To continue with the extreme example, there is a whole host of C bugs which are easy to add without realizing but are practically impossible with Haskell: memory issues (having the occasional segfault or bus error), concurrency issues (these can be very subtle and hard to catch), mutation in the wrong place (could also be very intermittent), potential type issues and so on.
If the language was C, I wouldn't want anybody without a lot of experience working on the code at all--there are just too many very subtle ways it can fail without being easily detected.
Even languages that make it harder to shoot yourself in the foot make it easier than Haskell does.
There is something to be said for hard things looking hard. Personally, I'd rather that code be in Haskell than Java, because there are plenty of people with room-temperature IQs who think they can code in Java but would get beaten up and have their lunch money taken by something like Haskell.
I've heard about some really interesting Smalltalk systems, super high-reliability stuff. I'm sure Smalltalk helped some, but what I suspect helped more is that the kind of people who were off-the-beaten-path enough to have years of Smalltalk experience were well above average.
He finished school in 2009, so most of the early ones seem to be internships or part-time. He has had 3 jobs since 2009: at Google, LivingSocial and Social Media Networks. There are very good reasons for leaving each of those places. I have no problem with judging someone based on their resume (I'm a recruiter), but I genuinely don't think there is an issue with this one at all.
> that linkedin profile shows a series of jobs, each of them lasting less than 1 year. That's a bit of a concern.
Can someone please explain why this is a concern? I've also had a series of jobs lasting usually less than 18 months. Whenever I witness people judging someone as a "job hopper", I wonder why they do it.
The only thing that comes to my mind is the idea of "loyalty", but just looking at dates won't give you any idea of why I changed jobs as often as I did, or whether I made sure before leaving that the rest of the company could pick up my work.
If anyone can share some deeper insight or a different point of view, I would be most interested in reading about it.
Sure. When I'm in hiring mode and see a resume like that, a few things come to mind.
First is the cost of hiring. Filling an open position is a lot of work. Sifting through resumes, initial phone screens, three kinds of interview, hiring paperwork, equity paperwork. The longer someone stays, the bigger the denominator on the cost/benefit ratio.
Second is the cost of bringing somebody up to speed. We work in a very specific way (continuous deployment, test-driven development, collective code ownership, pair programming) and learning it takes time. Even if you know all that stuff, the new person still has a ton to learn about the product, the history of the product, and our plans for the future. Again, the cost/benefit ratio improves the longer somebody stays. Especially so if we take a risk on people by, say, hiring somebody who doesn't know our language or our toolset.
Third is the social disruption. When somebody leaves a small company, it has a noticeable effect. That cost is hard to quantify, but it's definitely noticeable.
The fourth thing that comes to mind is something subtler. I think there's at least some correlation between how long people stay and other personality characteristics. For example, if somebody has never held a job very long, I don't have any reason to believe they have the capacity to stick through difficult circumstances.
And the last item is a suspicion of subtle flaws. I'm sure that some people have had a lot of jobs for reasons that are unrelated to them. But then there are the people who are better in the interview than in practice, so they eventually get pushed out. Or the sort who are on their best behavior at the start, but eventually let their personal issues all hang out. Or the sort who are a little grating to start, and end up a lot grating after you've worked closely with them.
So whenever I see a resume like yours, it definitely loses points. It doesn't disqualify somebody outright, but all else equal I'll always take somebody who has demonstrated they can stay at a job.
Truthfully, in some areas of the country doing consulting, not moving every 6 - 18 months is seen as a problem. You have a tougher time finding the next gig if you stay at a place for multiple years.
Am I the only person who finds it creepy that instead of offering evidence that speaks to his specific criticisms, you instead go for the google stalking to discredit him as a whole?
10 months is long enough to have a solid perspective on at least the groups you work in. His quitting is also a sign he takes his principles seriously. If anything I think that credits his criticism.
Agree with what you said, but I also think that Spencer's views on Java echo my own thoughts after working with a dynamic language for some years, following more than several years in Java. Java is too verbose for its own good, period. It has its place, but it sounds like Google needs to embrace change.
A lot of the "cons" in the technical section just sound like a large engineering organization making pragmatic decisions. I can't think of any company working at that scale that's successfully integrated a non-trivial amount of functional or logic programming. As much as hackers might not like it, there are reasons that coding at that scale is almost always a matter of banging out a lot of imperative code in a boring language like Java.
And I'm definitely not the only one that appreciates Go precisely because it doesn't play the Scala game of integrating every single academic PL research feature of the last 20 years but instead tries to be as practical and simple as possible.
When I squint at Go's syntax, underneath the C-like facade I see:
* higher order, first class functions
* lists (squint at the slice a bit; also chans)
* maps
* flexible structs (curiously, you attach "methods" to them much as you attach functions to Haskell typeclasses).
If anything, driven by pure pragmatism for the sake of concurrency and distributed computing, Go actually does implement really awesome functional primitives. However, it's built to be familiar to C and doesn't talk about it much.
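A sketch of what I mean (mapInts is a made-up helper, not from the standard library):

package main

import "fmt"

// mapInts is a higher-order function: it takes another function as a value.
func mapInts(xs []int, f func(int) int) []int {
	out := make([]int, 0, len(xs))
	for _, x := range xs {
		out = append(out, f(x))
	}
	return out
}

func main() {
	double := func(n int) int { return n * 2 }   // first-class function value
	fmt.Println(mapInts([]int{1, 2, 3}, double)) // [2 4 6]
	fmt.Println(map[string]int{"a": 1})          // built-in map type
}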
Personally, I love the OP's response to your last comment:
"I hate Scala with a passion, for what it's worth. I just wish there had been more advocacy for it."
I wouldn't characterize myself as hating Scala, but I understand where he's coming from on both points. There are reasons that this static typing fan tends to prefer Clojure to Scala on the JVM. Also, it's just a beautiful quote.
Characterizing Go like that would be too harsh. It certainly does integrate more experience than research, which is why Go is almost as often explained in terms of what it doesn't do as for what it does.
Go is not a research language (e.g. Haskell), it is pragmatic. I suspect a part of the design decisions of Go factored in training for people using that language. At a certain point, features that are alien cost more in training costs than they will gain in "elegance" or whatever.
I don't know. Maybe it's because I've been mostly looking at Haskell and OCaml recently, but most of the more "researchy" features and languages I've seen aim to be easier to reason about.
Unlike most languages (including Go, I believe), researchy languages tend to have very well defined semantics based on well-understood math--really going out of the way to be easy to reason about!
That's what I like the most about it. All the context you need to understand a block of code is right there, but it's still less verbose than something like Java.
We all acknowledge that we spend more time reading than writing code but for some reason we still design languages that make writing it the priority.
As a peripheral observer and user of Scala, I feel like it gets a lot of things right if you want to use it as a "better Java". But that doesn't mean it's particularly usable as a "better Java"; obviously reasonable people can disagree but personally I find it uncomfortable to write Scala code for too long. It feels like it leads to awkward structures (blocks within blocks within blocks, hard-to-follow but oh-so-functional transforms) and an odd sort of terseness that reminds me of Python (not a compliment, I find Python irritating to read even though I like a lot of the ways it does stuff). It's like there's so much stuff there that it encourages you to be way too clever. "Too clever" is a thing and a bad one; somebody (maybe even you) has to puzzle out your cleverness later.
I do find cases where the language actually does cause problems, though. There appear to be three main groups involved with Scala: PL academics, "better Java" people, and "why isn't it Haskell?" people. The last group is, to me, rather frustrating, because it's the group that gets up in arms at the idea of a 'for' statement in the language. (The standard method of iteration is a foreach method over a collection; people resort to writing while loops because the foreach construct is so laughably slow. Why am I faking a for loop in such a "modern" language?) There are a few areas like this where practicality has apparently lost out to zealotry, and it makes the language feel vaguely hostile to somebody who already Does The Right Thing in Java (immutable-by-default, dependency injection rather than creation, etc. etc.). As silly as it may sound, when I'm writing Scala I don't feel welcome in my own code--and I've been using Scala off and on for a couple of years, it's not just fear of the unknown.
I'm actually really looking forward to Kotlin (and Ceylon, to a lesser extent) because it's a language with a focus. JetBrains knows where they want to go with it instead of just dumping a box of stuff on your lawn and telling you to piece together what might or might not actually be what you want. The comparison always reminds me of The Big Lebowski: "Say what you want about the tenets of National Socialism, dude, but at least it's an ethos!"
No, in 2.9 using a 1 to 10 generator with a foreach was slower than a loop using primitives, due to the boxing. This is fixed in trunk.
> gets up in arms at the idea of a 'for' statement in the language
Scala's for is an expression; you can use it as a statement by not yielding anything if you want. Scala's for is extraordinarily versatile, basically allowing you to work within any monadic context (where a monadic context is some container that implements map/flatMap and some other optional methods).
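A quick sketch with toy values: the same for syntax works over List and Option alike, because both implement map/flatMap:

val crossProducts = for { x <- List(1, 2); y <- List(10, 20) } yield x * y
// List(10, 20, 20, 40)

val sum = for { a <- Some(1); b <- Some(2) } yield a + b
// Some(3)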
These "I can't believe its not Haskell" people tend to want to stay with the realm of FP, which is a very useful place to be. Minimising and tracking side-effects and having more expressive and useful types makes more composable software. This is not at all academic, it is an entirely practical result. It does however require you to rethink your approach to writing code, and it really helps to learn a few seemingly difficult things like the typeclassopedia (Monads, Monoids, Functors, Applicatives etc).
Personally I find that the Scala community is generally consolidating around functional approaches, and that this is a very good thing. It is a step beyond a "better Java" and is perhaps more initially intimidating, but it is definitely worth the effort.
> No, in 2.9 using a 1 to 10 generator with a foreach was slower than a loop using primitives, due to the boxing. This is fixed in trunk.
Last I checked (and this may be stale because I've basically given up on Scala except in trying to get the Play Framework to not suck, high-performance code isn't really necessary where I'm using Scala now), foreach over a collection emitted very ugly bytecode that was notably slower than a while loop using Java ArrayList<T>.get(). I'm not aware of that having changed, but I could be wrong.
These "I can't believe its not Haskell" people tend to want to stay with the realm of FP, which is a very useful place to be. Minimising and tracking side-effects and having more expressive and useful types makes more composable software. This is not at all academic, it is an entirely practical result. It does however require you to rethink your approach to writing code, and it really helps to learn a few seemingly difficult things like the typeclassopedia (Monads, Monoids, Functors, Applicatives etc).
Thank you very much for the lecture you decided I needed, but as it happens I am a teensy bit familiar with the concepts of functional programming. The more pure forms of it are uninteresting to me except as intellectual exercises. There are very valuable lessons to be learned from FP, but I do not find, and have not found, the pure-FP approach to be the panacea it is generally claimed to be. In particular, the loud noises of the pure-FP crowd about scary side-effects have generally been mystifying to me, because it really is not a problem I generally encounter in the code I write now. While it may strain your credulity, somehow this half-blind OOP-using simpleton manages to cobble together comfortably composable code with the meager tools available to him and, perhaps even more shockingly, it doesn't involve one single line of XML. =)
I would agree in a heartbeat that I am a better programmer overall because I have been exposed to functional languages. Obviously, many of the lessons from functional languages lend themselves beneficially to procedural/OO programming: immutability (for obvious reasons), function composition (Haskell's functor laws, as simple as they may be to a FP devotee, are something that often come to mind while I'm designing an API because so many implementations don't compose cleanly!), and careful choice of when and where to exercise side effects (which are, really, the interesting part of 99% of code). But I find the "let's make Scala into Haskell" people way, way overboard (I don't find the morass of monads and applicatives to be doing anything useful) and I generally find their style of programming aesthetically displeasing.
Please don't misunderstand me--the "functional people" are welcome to do whatever they'd like, of course, and I wouldn't tell them to write code the way I choose to! But the post I was responding to was under the "Scala is a better Java" impression, and I think this definitely qualifies as a reason that's not true.
> Personally I find that the Scala community is generally consolidating around functional approaches, and that this is a very good thing.
I'm glad for you. Different strokes for different folks. And if they want to do that, cool. I've made my peace with that and am sort of hanging my hat on Kotlin and Ceylon as the next place to go from here.
> It is a step beyond a "better Java" and is perhaps more initially intimidating, but it is definitely worth the effort.
There's no "definitely" about it and while I don't mean to pick on you, that sort of zealotry is exactly why I personally think Scala's future is being that guy in the corner complaining about how crude and unenlightened everyone else is and why aren't they using this thing I like?
I would totally agree with the idea that it is worth knowing a functional programming language or two so you can make a reasoned judgment about where to apply functional principles to all of your code. I would never agree with the idea that it is "definitely worth the effort" to use them in all cases because it's silly on its face. For me, going whole-hog on FP isn't worth a thing because it makes me much slower (and this didn't improve with practice), leads me to write code that I have a lot of trouble maintaining later, and leads me to write code I come to despise because the end result is code that I personally and subjectively find really fugly. So, for me, it's not worth the effort. I know, because I've tried it and found it wanting.
To be honest, I'm a little bit frowny that you'd evangelize it so dogmatically without pausing to think that maybe the person you're talking to does know a thing or two (the automatic assumption that disliking the FP zealots means I don't know anything about it is...interesting) and that it doesn't seem like you've considered the possibility that it isn't all-encompassingly wonderful for everything.
Yeah, this is really a consequence of Scala trying to do so much shit for you automatically with the map function. See, it's not just a map function, but also an auto-conversion function from any generic sequence to any other generic sequence.
What the hell does that mean? Okay, so basically, take your standard map function. Run it on the structure you're mapping over. Great, now you've got your updated structure! Now, you don't just want to return the same type of structure you just had, you also want it to be a different structure, but with those modified elements you had for its contents. Now build that new structure from the thing you just modified (Scala may actually optimize the traversal and construction of the new collection into one pass, although I am not positive). And there's your return value.
It actually gets more complicated due to the Liskov principle and the way that Java handles type erasure, but thinking about that shit just makes me fuckin' tired.
You want to know what the map type signature should look like if the Scala guys weren't going overboard on trying to make things seem outwardly simple?
class Functor f where
  fmap :: (a -> b) -> f a -> f b
It does indeed require some explanation to understand why this signature was chosen over a "simpler" signature that might have provoked less mockery from the interwebs. But it's not really that difficult, and it results in a collections library that's better than those of C#, Haskell, or Python. And I don't know about you, but I use collections a lot.
Well, if you're using Java, you don't have map in the JDK. You can use Guava (or write your own) to get Iterables.transform(). But then, anything you map turns into an Iterable. So often you have to iterate over the whole thing once again to get the data structure you actually need. You could write Collections.map(), Lists.map(), Sets.map(), etc, until you get tired. (Guava does not include these.)
But at this point you're still way behind Scala. What about flatMap, filter, slice, drop, scan, etc? Every collection in Scala has a huge set of methods that work on it. But they're not implemented again and again, because the type system is powerful enough to prove that the generic code that works for Set can be used for List. In contrast, every Java program is full of innumerable primitive operations expressed again and again, so you have to keep reading the steps of every for loop to determine, "oh, this one is doing forall. This one is map" rather than simply reading "forall", "map". Which is why, thanks to the above complexity, Scala can be much more readable than Java.
What about Haskell? The existing standard library is fairly weak on collections. There are not that many implementations, and there's no typeclass that quite captures the idea of a collection: you've got Foldable, Functor, Monad, and possibly some others, which all get slightly different aspects of it. Small typeclasses are nice in that they allow more instances, but often you need some functions that aren't there. There's no typeclass which captures the size() function, for instance. That's mostly a library design issue, which is more tractable than a language issue, but still pretty hard to fix.
There's one additional cool feature of Scala collections which I haven't touched on yet: transformations can result in collections that are different from the original collection type, and the most specific collection that is compatible is used. What does that mean? Say I map a TreeSet from an ordered element type to an unordered type. The result type can no longer be stored in a TreeSet, so you get a more generic Set collection instead. I'm curious what it would take to implement this in Haskell.
So ignoring Java and concentrating entirely on expressiveness & ignoring readability, maintainability, etc... It seems like Collections are the entire kitchen sink of algorithms that map across iterable content. How is that better than more precise types (functors, monoids, monads, foldables)?
For example: what is the benefit of reducing these data structures (and associated algorithms which operate on them) to more generic versions?
> There's one additional cool feature of Scala collections which I haven't touched on yet: transformations can result in collections that are different from the original collection type, and the most specific collection that is compatible is used. What does that mean? Say I map a TreeSet from an ordered element type to an unordered type. The result type can no longer be stored in a TreeSet, so you get a more generic Set collection instead. I'm curious what it would take to implement this in Haskell.
While I agree that this is "cool" that you can do that, I really don't see the use of it except when sketching out ideas when you're not sure what kind of data flow you want yet... but that seems like a problem with designing the program and should be avoided.
In Scala 2.7 the signature for map, defined in Iterable[+A], was
def map[B](f : (A) => B) : Iterable[B]
That seems fairly close to the Haskell signature, although somewhat less readable. However, in Scala 2.8, things got more complicated with the addition of the implicit builder parameter. Consider two functions that I might apply to a collection of integers (illustrative definitions, along these lines):
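val double    = (x: Int) => x * 2
val stringify = (x: Int) => x.toString

Set(1, 2, 3).map(double)     // Set[Int] = Set(2, 4, 6)
Set(1, 2, 3).map(stringify)  // Set[String] = Set("1", "2", "3")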
Nothing surprising here: we provide a function A => B and use it to map from Set[A] to Set[B].
Now let's use the optimized BitSet collection, which efficiently stores integers (again, along these lines):
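import scala.collection.immutable.BitSet

BitSet(1, 2, 3).map(double)     // BitSet = BitSet(2, 4, 6)
BitSet(1, 2, 3).map(stringify)  // Set[String] = Set("1", "2", "3")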
Notice that the collection type is BitSet for the function with return type of Int, but plain Set for the function with return type of String. This functionality is brought to you through the implicit parameter.
I've never used Scala, but it seems to me that that signature is simply ensuring that the type of map is as flexible as possible, and that actual specializations will be inferred by the compiler without the user having to actually specify any part of it.
A few days ago I was at a conference and there was this Google recruitment desk, with gadgets and two nice guys.
I walked near the desk and said: "Hi, can I have a t-shirt for my son?", and they said sure, asking me to fill in a form on an Android tablet.
Basically it was a list of programming questions, and out of 5 questions I was able to answer only three. One was the time complexity of heap sort, which is pretty obvious; another was a probability theory quiz about throwing balls into bins, also pretty reasonable. Another one I can't remember, but it was still pretty general.
Then there were these two questions I was not able to answer: one was about Java argument-passing conventions in some specific kind of method I can't even remember (I can't write "Hello World" in Java). Another was about graph theory and adjacency matrices.
Well, my idea is that a big percentage of Google's problems are due to this kind of hiring process. It's good if I know the fundamental algorithms as a programmer, but why am I required to know Java, as if it were a prerequisite to being a good programmer? About graph theory, it's something that is rarely used; if I need to do something I can grab a book and check it. It's strange to remember this stuff when you don't use it a lot.
Now hiring only guys that know exactly the Java calling conventions AND graph theory means applying a big filter between the programming world and your company. This filter is good, as it's cool to know a lot of things, but it's also bad because, for instance, a lot of good programmers I know don't care about Java at all, and while they may have a generally good understanding of algorithms, they don't remember how to factor a number with Pollard's rho, or dynamic programming, and so forth.
This filter does not tell you anything about the real ability of the candidate to write good programs, to use the right abstractions, and to design beautiful systems. Actually, it may tell you that the candidate is focused on the details, and the brain is a zero-sum game sometimes.
Another thing I'm pretty convinced of is that this kind of candidate is not the only kind of employee you want if you want to enter the social network business and compete with Facebook. Your output as a company will be a system that bears little resemblance to what average people want.
So the #1 problem of Google IMHO is to allow more internal diversification of cultures and programming backgrounds.
>about graph theory, it's something that is rarely used, if I need to do something I can grab a book and check it, it's strange to remember this stuff when you don't use it a lot.
I don't think you appreciate how pervasively useful a basic understanding of graph theory can be. For example, Google was founded around the idea of viewing the internet as a graph of hyperlinks (and decomposing its adjacency matrix to rank sites relative to each other). All that wacky data on the internet is easier to deal with if you can think about its graph structure.
Social network? Graph. Language model? Graph. Road network? Graph. etc...
This is a tool for thinking, not some particular API you can look up in a reference. You should be able to reason about the sparsity of the graph (and hence what sort of adjacency representation to use), the degree of the vertices (and thus how badly a search algorithm might perform), etc...
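For example (a toy sketch, not any particular API): the same five-vertex graph stored both ways, with the reasoning in the comments:

import java.util.ArrayList;
import java.util.List;

class GraphReps {
    public static void main(String[] args) {
        int n = 5;

        // Dense graph: adjacency matrix -- O(n^2) space, O(1) edge test.
        boolean[][] matrix = new boolean[n][n];
        matrix[0][1] = matrix[1][0] = true;

        // Sparse graph: adjacency lists -- O(n + e) space, O(degree) edge scan.
        List<List<Integer>> lists = new ArrayList<List<Integer>>();
        for (int i = 0; i < n; i++) lists.add(new ArrayList<Integer>());
        lists.get(0).add(1);
        lists.get(1).add(0);

        System.out.println(matrix[0][1] + " " + lists.get(0));
    }
}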
You can understand graphs without remembering a theorem about adjacency matrices. I know a lot of people who would score 100% on the whole quiz but don't have an intuitive understanding of algorithms. What you want in a programmer is the latter, not the book remembered verbatim.
You don't have to know Java to get hired, any programming language is fine. You do get asked design questions. I think making the leap from these 5 questions to "This is the #1 problem Google has and why they fail at social networks" is a bit far fetched, but it looks like I'm the only one...
This bit seems wrong (about private open source projects):
> Technically, Google owns everything you write while you work there, even if it's on your own time and with your own equipment.
Google is a California employer, and that's directly counter to California law as I understand it. The only exceptions made are for product areas directly related to your work (not just the employer's business interests). Am I wrong about this?
Googler here (though not a lawyer and I don't speak for the company). I agree that the OP's assessment that you quoted is incorrect in general, for the reasons you mentioned. The Google-internal process that OP is alluding to allows an employee to seek positive assurance from Google that the company will not attempt to claim ownership over the project. It's a way to get agreement up-front that both parties agree the project is unrelated to the business. You don't give up any rights by not going through the process, but you might discover later that Google believes the work is related to the business, and believes that it owns the IP.
However, there is another completely separate process that allows you to open-source your code while having Google retain copyright. It's much easier to get through (turnaround is just a few days) and lets you release the code under a permissive license like BSD or Apache2. I write a lot of open-source code on the side, and I've used this process to get approval for all of it. It makes no difference to me whether Google technically owns the copyright.
I would add that Google's open-source policies are a huge plus in my book. It's a night-and-day difference from Amazon where I used to work. Trying to get anything released as open-source at Amazon was like pulling teeth.
Thanks for putting this up here :-) (I handle both processes) The copyright process usually takes around 6 weeks, but new open source projects can be released in days (and patches much faster)
The open source thing I can understand. The part for non-open-source projects is hard for me as an outsider.
I can't see how any employee can have a web-oriented side project that doesn't compete with some Google service. You have a social network, a blogging platform, photo sharing, video sharing, online music, PAAS, code hosting, etc., etc. What area are you not in?
To this outsider, your question sounds like exactly the reason Google needs this sort of process. Even if an employee were working in an area that was theoretically competitive with something, an official determination by Google should trump theoretical arguments - providing valuable certainty to both sides.
We have a small group from code of conduct/conflicts, open source, Android and folks who represent a few other strategic groups at Google, and we simply review requests one by one. Most of the time it isn't a problem and we grant the request. Getting everyone together regularly can be a bit more difficult; that's why it takes longer.
As a lawyer (IP, not employment), I can affirmatively state you are wrong about this, for reasons various folks mentioned below (the actual phrasing deals with relating to your company's business, not directly related to your work).
Additionally, court interpretations of this statute are not as favorable as engineers like to believe (IMHO).
Living in !California, from what I see, California is only theoretically different from most states. In practice, for technology companies, the law basically means your employer owns stuff on your free time, in the general case.
(IE If you were working for a startup that does online home rentals, and you were making a video game, you may have an argument)
Surely those court interpretations are situations where employees left to pursue products though, right? Is there really case law where a company sued to recover ownership of a free software project? That just seems lose-lose all around. Bad PR, bad karma, no deep pockets to sue, no damages to recover.
Personally, I find this kind of bureaucratic nonsense infuriating. And I guess I thought Google was better about it, but it doesn't seem to be. My past employers have, for the most part, simply shrugged when told about open source work I've done. Honestly I have to think that if I really ran afoul of a rule like this I'd just call your bluff and see if you'd fire me over it (because as I mentioned, we both know you won't sue).
What companies do you know that even offer this type of process? At most companies you have two options: ask and be told "no you can't", or don't ask, don't tell, and pray you don't get sued.
How is Google having an official policy to get clarity a bad thing here?
Everywhere I've ever worked has been perfectly fine with my open source contributions, and never asked for a copyright assignment (to be fair, nor have they offered one: this is the kind of thing DannyBee finds untidy and unsafe). This madness is an affliction of large companies, and in particular their legal departments. And, given the small sample size I have of "large tech companies", Google isn't much better than the norm here.
Basically, demanding copyright assignment in exchange for a "simple approval process" seems like a poor bargain to me. It's not at all unreasonable to expect that employees be allowed to keep their own IP, and to argue otherwise is IMHO dangerous to open source.
There is plenty of caselaw I can find where, for example, employees have not left, but refused to sign over patents on stuff done in spare time, and companies sued to recover them while the employee was still working there. I have yet to find one where the company lost.
For open source work, if you go through the process, it's short and we're happy to approve it (patches take about 30 minutes, projects, 3-7 days). In the IP release process, it's longer. That is better than every company i have worked at.
You seem to think it's not very important, and just nonsense in general.
Knowing who the legal copyright owners of the source code in your open source project are is quite important (in the cases you posit where everyone is just saying "whatever", the owner is unclear). If you don't think so, I can tell you that in the one important case in the US where an open source project had to defend itself, Jacobsen v. Katzer, Victoria Hall (the lawyer) spent a very large amount of her time trying to get all this sorted after the fact, and it is only by luck that the 50+ contributors she had to wrangle were still around and accessible.
This is probably 300k in legal fees that could have been saved by having done the right thing up front at a cost of about 30 minutes. Note that these legal fees were paid by the poor guy who was running the open source project, not by his contributors.
If you want me to be blunt, no offense, I've met folks like you, who think all this is a waste of time, and you are generally not worth the eventual expense in legal fees to the open source projects you contribute to (IE when the open source project needs to actually defend its rights, or the company behind the OSS project gets sued). Shrugging and saying "whatever" also has a funny way of turning into "that was really mine and guess what, we want damages" when the company you worked for gets bought.
It's great to want the world to be different. I support all efforts to make life better for engineers around owning stuff in their spare time. I also spend a lot of my life trying to help open source in general.
But to be blunt some more: The world we live in right now is not a happy pretty place. The reality is, if folks like me, and the companies we work for, tell you that when you release a new open source project we probably own, you should spend 5 minutes filling out a 4-field form and waiting a day for some folks to click approve before you fly and be free, it's not because we are power-tripping bureaucrats. It's because we're trying to save millions in legal fees when it matters, and make sure the open source project you want to release is not going to be in a bad place, at a cost of 5 minutes of time and a small amount of waiting.
So my advice is, if you think you really want to "call someone's bluff", I would instead think hard about whether you are really expert enough in this stuff to be able to say it's all nonsense, and stop to think about whether the people who are perpetuating "nonsense" may actually have your best interests at heart. That is, trying to save you from yourself. If you really think it's a great idea to have an awesome open source project used by 1 million people when it's not clear who actually owns the code, then all I can say is that I pity those people.
You're projecting pretty badly here, and are flaming where I really don't think it's appropriate ("People like you", indeed).
For clarity, though, when you say "the open source process", you mean the process where Google gets assigned ownership. An employee who wants to release something under an OSD compatible license without assigning copyright (for whatever reason, say because they want to use the GPLv3 and you don't, or because they simply don't trust a public company) does not simply fill out a 4-line form and get an answer in a day, as I understand it.
(edit: And you continue to euphemize this. It's not "special treatment", it's a flat out copyright assignment. Ownership isn't completely clear, so one of the parties needs to give up stake. And the process is clearly asymmetric in Google's favor. That's not surprising, really, but what is surprising is that you won't come straight out and say this.)
And my advice to you is that you consider the costs of this sort of thing to the employees and the engineering work when making your conservative legal pronouncements. As we've seen in the news today, even Google's rigorous IP process isn't enough to keep you from getting sued. The world is indeed not a happy pretty place; cavalier avoidance of process may not help things, but neither does paranoia.
1. I'm perfectly comfortable with what I wrote. I neither believe it was inappropriately flaming given what you wrote, nor do I think it is "projecting pretty badly", as you say. (The "people like you" was intentional, and given the evidence I have, seems correct. If it turned out not to be correct, I'd happily retract it.)
2. Yes, people who want special treatment take time. We've made the common case fast, and the exceptional case possible.
3. We do consider the costs, and try to keep the process as simple as possible while still accomplishing its goals. Note that I am an engineer as well as a lawyer, and contribute to a large number of open source projects and so I have to go through the same process everyone else does.
4. You honestly don't know what you are talking about when it comes to things you've "seen in the news", and I'll just leave it at that.
> There is plenty of caselaw I can find where, for example, employees have not left, but refused to sign over patents on stuff done in spare time, and companies sued to recover them while the employee was still working there. I have yet to find one where the company lost.
Is it possible to sum up briefly why this is? It seems to me that there are three kinds of spare time projects:
1. Spare time projects where the employer has no legitimate moral[1] claim of ownership.
2. Spare time projects where the legitimate moral owner is ambiguous.
3. "Spare time" projects where the legitimate owner is clearly the employer.
If employers only ever bother to sue in cases 2 or 3 I'm not sure there is a problem. Especially if there are internal procedures engineers can use to secure ownership of software in category 2.
Are companies routinely winning in case 1? If so this is a flaw in the legal system and should be fixed.
[1] Based on current generally accepted ideas about ownership. If a developer spends his nights working on a personal project with no connection to the employer, a reasonable person would attribute ownership of his work to the developer, not the employer who pays him during the day.
When you work for Google (who does everything) and you do anything computer-related in your free time, I guess Google owns it. In fact, much non-computer-related work would probably also be owned by Google, since they do a bit of that too.
However, these things can have subtleties introduced by the courts, so anybody reading this should talk with a good lawyer before betting a lot on this point.
It looks like that's true, "except for those inventions that ... Relate at the time of conception or reduction to practice of the invention to the employer's business, or actual or demonstrably anticipated research or development of the employer."
With a company like Google it would seem that quite a lot of things might fall into that category.
True enough. But that's still a far cry from "all your codes are belong to us, get in line for the committee". I suspect this is an instance of policy getting ahead of legality. It makes sense for Google to want to review employee open source work (and they could even do things like fire people who don't honor the process). But if it was represented to him that he needed approval to legally release code, then I think someone lied to him.
Let me quote what I wrote in the thread we had on Hacker News yesterday on this exact subject of Spencer leaving (with the same title, no less :P), then let me explain something additional.
First, a quote:
"First, this is not the normal open sourcing process. He says "This uncertainty bothered me a lot, since I wasn't sure whether my project could be legally released as open source.". The normal open sourcing process takes about 3-7 days. If he really wanted certainty about releasing it as open source, he could have gone through that process and been done with it.
The process he is talking about is the process of Google granting ownership of various IP rights that Google would normally own to the employee. For various reasons (ethics, patents, copyright, etc.) this is more complicated, and takes longer. Google is one of the few large companies that even lets you do this, AFAIK. The humorous part of all this is that the page describing the process states quite clearly it will take about 2 months to make a decision. So it's not like the 2-month wait was unexpected, either, and phrasing it like he does implies that there was some amount of uncertainty in the time period where he was being strung along, which is simply not the case."
In addition to the above, let me add that at least in the case that took two months, Spencer wasn't asking the committee about open sourcing. He asked "If possible, I would like to own all IP (though Google can use it too, just not patent it) and possibly not release the results of this research to the open source community."
This is the one that took 2 months to decide. He had said "this is probably not related to what Google is doing", and I explicitly warned him, the day he applied, that what he wanted a release for was in an area Google did care about and was doing research in.
> The humorous part of all this is that the page describing
> the process, states quite clearly it will take about 2
> months to make a decision.
I'm not on the corporate network right now, but if I remember correctly, the page actually says that the wait will be up to a month, and usually shorter. Two months would definitely be unexpected.
Also, there seems to be something mildly broken somewhere in the process. After I submitted my project, I waited a month with no word before contacting cdibona privately -- he told me that my project had already been approved. A month after that, I received an approval email. So even if the process is fast from your perspective, it may seem very slow from the perspective of the applicant.
So the page now says a month; you are correct, and I am now apparently wrong :P.
I have edited it to state that if you don't hear from us, you should please ping us. It doesn't change the fact that in Spencer's case, he had a method for certainty, and it would have taken far less time. Getting approval for special things (what he asked for is something the committee explicitly, in bold, on the page, says it does not generally do) does take more time.
It's also not like we meet in some secret star chamber-esque fashion (those meetings are not monthly). He could have checked my calendar to see when the next meeting was :).
Your case is a bit weird, from what I see. Your project was approved, but the notification date is wrong.
I imagine cdibona broke the script that sends out emails and didn't notice/rerun it until the next meeting, so you ended up with a month-late notification. As I said, I edited the page to make it clear folks should ping us instead of wallowing in silence.
What's so fascinating about this is that it provides a glimpse of the inner workings of the Google machine. Even Google, with its second-to-none reputation, cool tech, and HR practices, ends up alienating this employee because of its bureaucracy. This in a company founded just fourteen years ago. On one hand it's comforting to know that even the best of the best lose an important aspect of their competitiveness and might be defeated; on the other hand it is troubling how all companies seem destined to lose their agility and flexibility. Not even Google, with all their incredible resources and brilliant minds, can solve this, it seems.
It sounds like you're reading an awful lot into one person's experience. Lots of people at Google love their jobs, feel challenged and fulfilled, and see all the opportunities Google provides (from touching the lives of millions, to open source projects, to 20% time, to access to every project at the company, etc. etc. etc.) as an amazing blessing.
Others have either a bad experience, a bad attitude, or maybe a little of both.
I don't think one guy's opinion is really a reflection of most employees' experiences at Google.
I've been here 4 years. Haven't written one line of Java. Get paid to contribute to open source, have started 5 to 20 side projects, all open sourced. I could go into a lot more things but suffice it to say Google is by far the best place I've worked out of the 10 places I've worked so far.
Java was viewed as being "good enough"; alternatives like Scala and Clojure were not considered.
If a company enters the what-is-the-hottest-language-of-the-week game, it can easily drag itself into a downward spiral of death, because language choice largely boils down to taste, and it leaves a lot of hurt feelings along the way. Language choice is also largely about maintenance 10 years down the road.
Scala and Clojure each have significant design flaws, in my opinion, and neither would have been a significantly better choice.
Couldn't agree more! Be it a startup or a mega-corp, VPs should make up their minds upfront and stay on their path until something amazingly better comes down the road. C'mon, if Google had chosen Scala, for example, they could have produced clean, concise, FP-oriented software at the price of sluggish compilation times (many wasted minutes), lots and lots and lots of generated bytecode, and binary incompatibility (ouch!). Jump on the trendy-language bandwagon and you find yourself nowhere pretty soon. I know of at least two prominent startups in the Bay Area that are moving off Scala and adopting old-fashioned Java, for many reasons, but fondness of OO or lack of vision are not among them. And a third startup is stealthily switching to Clojure. On the other hand, you have C/C++.
A big organization not wanting projects written in Clojure, Scala, or other language of the month? Yeah I wouldn't either. If 99% of your coders know Java, and there's no compelling reason otherwise, go with Java.
> Pathological love for Java and anything resembling Java.
And I think that's a good thing. Over the years I finally realized that it's not the programming language you use that matters, or that makes you look smarter; it's the kind of problem you are trying to solve by writing code. The field of computer science is far, far richer than the PL research subfield.
I mean, you may be writing another boring enterprise web application in Haskell, or solving an artificial intelligence problem in Visual Basic. I would prefer the latter to the former, although I hate VB. I know both situations are contrived; this is just a thought experiment to illustrate my point.
Don't get me wrong, I love "esoteric" programming languages. A few years ago I spent quite a lot of time playing with Haskell, Prolog, Lisp, etc., and I don't regret it. No, I don't use these on a daily basis, nor am I going to, but I learned a hell of a lot of cool new stuff. Most importantly, it taught me about new _paradigms_ of programming, which I really think every programmer should understand.
These days, however, I try to squeeze as much math, and as many algorithms from different domains, as I can into my poor stupid head, and I think the payoff will be much bigger for me.
PS
A few months ago, at a local functional programming meetup, some guys presented their Scala solution to the trivial problem of validating web forms in a rather trivial web application. Their solution employed a whole lot of functional machinery: functors, mappings, and other things whose names I forget. I was trying to understand what they were doing, but they lost me after the fifth minute of the presentation. It took them maybe a week to write all this code. Do you see the irony?
Most Google projects are not written in GWT for the frontend; they are written in JavaScript using Closure and the Closure Compiler. That includes Gmail, Search, G+, Docs, Drive, etc.
Really, it sounds like this guy got put onto a team project he didn't like, but Google has very large codebases in C/C++, as well as JS, and there is ample opportunity to work on non-Java stuff. Chrome and Android (non-userland) in particular.
Any company with a team of people is going to have to place limits on polyglotism and have standards for readability and coding. I like Scala too, but the idea that a company is going to allow anyone to just pick any PLT research languages that they personally like sounds far-fetched.
The trend of "all OOP", especially in the Java world, and how OOP is done, is BAD. As in "there's something in my eye" bad. This reflects exactly on this: "Reviews preferred local simplicity over global simplicity; abstraction was discouraged"
I feel like I'm a million years old, but I think procedural programming does it right.
It seems most OOP programmers want an excuse to do 10 levels of inheritance and split the functionality in weird ways.
The rest seems likely to derive from Google's broken hiring process.
Nah, I've been around long enough to know that local simplicity ruled the day just as much if not more in pre-OOP days.
OOP is great because it allows programmers and architects to create reliable and clearly defined types as data building blocks. But sometimes the hard parts of the problem are not in the structure of runtime data. Classic imperative and structured programming focused on the format of input data as the driver for code architecture, and that is the right way to design processes. Performance-oriented code needs to pay attention to data access more than to data structure or even to operations.
Doing only OOP takes your attention away from those issues, and thus limits your design. Also, OOP focuses your attention on abstraction, and with a large enough problem, a lack of visibility and clarity of requirements leads you to abstraction for abstraction's sake.
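To make the data-access point concrete, here's a minimal Java sketch (my own, with hypothetical names, nothing Google-specific): the same logical data laid out as objects versus as a flat array. The array version reads contiguous memory, which is what performance-oriented code actually cares about.

    import java.util.Arrays;
    import java.util.List;

    public final class LayoutSketch {
        // OO layout: one heap object per point, so iteration chases a
        // pointer per element.
        static final class Point {
            final double x, y;
            Point(double x, double y) { this.x = x; this.y = y; }
        }

        static double sumXObjects(List<Point> points) {
            double sum = 0;
            for (Point p : points) sum += p.x; // indirection per element
            return sum;
        }

        // Data-oriented layout: one contiguous array per field, so
        // iteration is sequential and cache-friendly.
        static double sumXArray(double[] xs) {
            double sum = 0;
            for (double x : xs) sum += x;
            return sum;
        }

        public static void main(String[] args) {
            List<Point> pts = Arrays.asList(new Point(1, 0), new Point(2, 0));
            System.out.println(sumXObjects(pts));               // 3.0
            System.out.println(sumXArray(new double[] {1, 2})); // 3.0
        }
    }

Same answer either way; the difference only shows up in how memory is touched, which is exactly the kind of concern class-first design tends to hide.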
Your complaint is the opposite of what he was complaining about. He wanted to be allowed to split functionality in weird ways, abstract more and employ meta-programming, which is way less procedural than OO.
Even as a relatively young† programmer, I’m with you to a certain extent. OOP‡ is both technically and philosophically unsound.
However, though I use C quite a bit, I don’t think procedural programming is the paradigm to end all paradigms. My knowledge, productivity, and code quality have all benefited from functional programming (Haskell and Scheme) and generic programming (C++).
Also, splitting up functionality is a part of good abstraction, and “weird ways” might only appear weird because you’re unfamiliar with them. That’s how I felt for a while about monads, for example. Some of it is just having the experience to see what qualities make code good in a particular paradigm. And regardless of paradigm, good abstractions are essentially about factoring and reducing repetition (DRY).
†though not terribly new
‡the “classes and inheritance” kind that Java epitomises
Philosophically? Can you please elaborate on this? Technically, I don't think there is any unsoundness in OOP as compared to functional programming (or procedural, or generic). Technically, there are paradigms available for all requirements. It might end up being messy or boring or bloated, but that is not a technical issue so much as a human one. OO code, on the other hand, is incredibly easy to grasp, and the ubiquity of OOP fuels itself. Maybe we would be at a higher level in our enterprise code if we had chosen a superior language like Haskell, Scheme, or Python for everything, but we don't know that for sure. Code is as much an art as it is a science, and sometimes better art is possible with outdated paradigms like paint and paper than with touch interfaces and brilliant applications.
As mappu mentioned, I was inadvertently paraphrasing Alexander Stepanov, who said it better than I can:
“I find OOP philosophically unsound. It claims that everything is an object. Even if it is true it is not very interesting—saying that everything is an object is saying nothing at all.”
Most of my work has been in game dev and languages, so I may be biased, but the problems I encounter tend to be about 90% dataflow, constraint satisfaction, events, and logic programming, and 10% sequential/imperative code to string it all together. OOP (again, in the Java sense) can model the non-sequential parts, but it’s just sort of awkward and unpleasant.
“I find OOP technically unsound. It attempts to decompose the world in terms of interfaces that vary on a single type. To deal with the real problems you need multisorted algebras—families of interfaces that span multiple types.”
“I find OOP methodologically wrong. It starts with classes. It is as if mathematicians would start with axioms. You do not start with axioms—you start with proofs. Only when you have found a bunch of related proofs, can you come up with axioms. You end with axioms. The same thing is true in programming: you have to start with interesting algorithms. Only when you understand them well, can you come up with an interface that will let them work.”
You can’t come up with an interface until you know that it actually corresponds to anything. The defining feature of OOP is also its greatest weakness: that it bakes interfaces into datatypes. Modularity is crippled and change is prohibitively difficult, resulting in extensive workarounds (and a proliferation of design patterns).
But ultimately, programming is about what you create with it. We can argue about languages and paradigms all day, or we can just make cool stuff however works best for us.
I believe the GP is quoting[1] from Alexander Stepanov (author of the C++ STL).
Saying "pick the right paradigm for the task" is of course the correct answer, but it's fun to argue sometimes, and OOP is definitely overhyped in certain markets (for instance, university CS)
Yes, I was (somewhat accidentally) paraphrasing Stepanov. It’s been a while since I read that.
But actually, I think “pick the right paradigm for the task” needs the qualification of “…within your other constraints”. The most expressive language for a job might not work in your situation—whether for reasons of efficiency, of maintainability (i.e., finding maintenance programmers), or of interoperability with the rest of your stack.
I'm surprised by the tech part. I was under the impression Googlers used a lot of Python, for instance, and possibly other languages... Are these exceptional cases, then?
Google's a bit ambivalent about Python. On one hand, you have a contingent of programmers who don't consider Python a "real" programming language; these types seem to be mostly on the search team. On the other hand, there are plenty of pockets within the company where Python is used pretty heavily. The most notable is arguably the App Engine team.
I found this out during my interviews at Google; after I solved a problem using Python the interviewer gave me another and said "this time use a real language." Ouch.
That's the sort of thing that would make me strongly consider ending the interview, at least if I was going to be working directly for the interviewer. No need to waste more of their or my time than necessary, and I really wouldn't like working in that kind of insulting environment. And from a probably-immature perspective, the feel of power from being the guy saying "sorry, I'm not interested anymore" instead of hoping they call you back is nice.
Or, if I was really annoyed: bust out some Common Lisp for the next problem, then leave.
You should definitely mention things like this to your recruiter. Sometimes interviewers misunderstand or make a mistake, and that shouldn't be counted against you.
Obviously it's rational for this event to make you uninterested in working for Google, but if that's not the case, it helps to escalate your concerns or complaints.
That's pretty bad. Interviewing should also be about selling the candidate on the company. If they get an offer you want them to accept. If they don't then you don't want them bad mouthing you to other potential candidates.
If an interviewer insulted me or my language choice in an interview it would definitely make me wonder how they would treat me as a coworker.
The entire Google interview process is slanted that way. They keep making you feel privileged for being able to take that interview, and almost never try to sell the company to you.
This has changed a lot for the better in recent years.
Also, a lot of geniuses are socially inept. I doubt you said everything right in your interview either. It helps to smile and keep going when one person in ten shows a flaw.
When I get stuff like that, I'm generally like "oh wait, I'll try a real company". Not that C isn't lower-level, for example; I love C. But since the interviewers are actual devs and managers at Google, it says quite a bit about the place.
I'm guessing it'd be nice to start writing in a lesser-known language, or even just go for some old-architecture object code (like 68k object code, which is pretty simple to write; the chance is high that the interviewer will be clueless and find it a bad joke. And yes, I can write that, as I did a lot of it 15 years ago; it's straightforward).
Then Google marks you down as a "do not employ" person, of course, and says that you don't fit (even though you declined them). But hey, honesty, balls, and being dumb sometimes feel good. Sounds like a true story... oh, I know.
That's a lame response from the interviewer, but you should probably have asked him what language you should use before doing it. If he did not say, then his question is underspecified (maybe to test you).
When I interview someone, I explicitly prefix coding questions with "you can just write pseudo-code or whatever you prefer" or "please use C/C++".
This is good advice. I believe I did ask and get the okay for using Python on this particular problem, which made the reaction a bit more surprising to me.
The next question had to do with string manipulation and I think he was just worried it would've been terribly easy if done in Python.
While Python is a sizeable part of App Engine (SDK, admin console, tooling... some other things I can't remember right now), the majority of the code is Java and C++. It's one of the reasons Python isn't listed as one of the top needs (http://googleappengine.blogspot.com/2011/07/wanted-app-engin...) over C++ and Java.
That being said, it's a very Python friendly team. Guido's on it (HI GUIDO).
Every time here at Google that I've started a project in Python, I've finished it in C++ because Python had neither the speed I needed nor the maintainability I wanted. Every. single. time.
Static typing is a godsend when you're working with code that you only vaguely remember from the last time you worked with it a year ago. It also gives me much more confidence when refactoring: the compiler will find the callsites I missed.
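A toy example of what I mean (hypothetical names, nothing from an actual codebase): introduce a dedicated type for an identifier, and the compiler enumerates every call site still passing the old representation.

    public final class RefactorDemo {
        static final class UserId {
            final long value;
            UserId(long value) { this.value = value; }
        }

        static final class UserStore {
            // Previously: String lookup(long id). After the refactor,
            // every caller still passing a bare long is a compile error,
            // so no call site can be missed.
            String lookup(UserId id) { return "user-" + id.value; }
        }

        public static void main(String[] args) {
            UserStore store = new UserStore();
            System.out.println(store.lookup(new UserId(42)));
            // store.lookup(42L); // would not compile: long cannot be
            //                    // converted to UserId
        }
    }

In a dynamic language the equivalent mistake only surfaces when that code path actually runs.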
I've worked on rather big projects in Ruby and Javascript, and testing always managed to keep things sane. Static typing helps you refactor things or make sure certain variables are the correct type, but in the grand scheme of things, I'm not sure type errors are the major source of bugs in any system.
I do like Python but haven't used it for anything more than a few thousand lines.
The dynamic nature puts many errors into the runtime stage and increases verification complexity. This can be limited by static code checking, but that grows cumbersome.
Python's loose typing can cause some problems when new people start contributing to a sizeable project, problems that would be caught earlier in languages with stronger/earlier type checking.
When I was there it was more by "layer": if you were on the customer-facing side you were pretty much talking Java or C++, but if you were in operations it could be Python, C, or even bash scripts.
The part of the codebase I work on in maps is written in C++, making it the language used most often within my team. I've finished a number of projects in python though, and I don't get any impression that I'm required to program in these languages should I think another is more appropriate.
This is the best "Why I Left Google" article I've read, and I really don't have any criticism for it. Everything he says is true on some level, but things that look bad don't look as bad with some context. So I hope to provide that in case you are thinking of working for Google but are pushed in the "nah, maybe not" direction because of articles like these.
First off, you have to remember that Google is a big company. It's one of the world's most profitable companies, and it has quite a few employees. This is not a startup. This is not a world free of politics. This is not a world where the execs will listen to all of your concerns. (With that in mind, it is very easy to get their ear and voice your concerns. It's just that they might not drop everything to do things the way you think they should be done.) This is a publicly traded company with lots of stuff to do and processes that scale to a large number of employees.
But, that's not a horrible thing. If you've worked at other big companies (or bigger companies), this place is like a dream world. You know what other people are doing. You can see their code. You can see their processes running in production. You can have internal mailing lists where you might accidentally say something that looks bad during discovery. Every change you make to the codebase gets another developer saying, "yes, this looks good" before you commit. Everywhere I've worked before, anything like these would have been immediately shut down. Code reviews slow progress. Mailing lists are a legal minefield. Sharing code doesn't allow the correct cost center to pay for its development. That's how it is. If you are unhappy with Google's corporate policies, try working for Bank of America. You will not complain as much :)
On the other hand, if you are the "I want to work by myself and be a product manager, engineer, sysadmin, and CEO", then the big-company culture is not for you. I write code and talk about writing code. That's it, someone else does everything else. I like that, but if you don't, you won't like the big company lifestyle.
(So, with that in mind, I don't think Google is a good place for your first job out of school. You will be annoyed with the M&M restocking policies, quit out of protest, and realize how horrible the real world is. IMHO.)
Now on to some specific points:
Programming-related:
Pathological love for Java and anything resembling Java.
This is a network effect. In any project I've started, I've been able to use any language I've wanted to. Plenty of projects are using Haskell, my personal favorite app programming language. The reason why I choose Python or Java at Google is because I want a large pool of people to get feedback from on code reviews, design reviews, at "could you take a look at this" time, and so on. Everyone knows Java, and so I can be more productive at Google if I use Java. I don't spend much time typing in code. I do spend a lot of time working with other people. I think Java is objectively a terrible programming language and I hate it conceptually, but the great internal community and great internal libraries make it quite usable. (Read my HN history. I am not kind about Java. But at Google, the limitations I complain about are not as relevant as they are when you are "out on your own". I don't foresee any personal projects in Java any time soon, however. It's too verbose for apps that don't need to be fast or scale. And it's not really fun like Perl is.)
If you absolutely cannot work with Java, then there are plenty of C++ projects to work on and your skills will be valued highly. Go is probably quite acceptable, too, given the right project.
Most engineers were not comfortable with {functional, concatenative, combinatory, logic, meta} programming.
Even though it's Google, programmers are at various different levels in their programming careers. I personally like programming languages and their differences in style, but other people are more practical and just want to sit down and churn out features for their project. They don't want to switch to Haskell because it's better at X than Java. They just want to make something new. So, if you want to be different, you need to be prepared to go out on your own. And if you're new to Google, you might not be ready to do that for two or three years, because there is so much other stuff to do. You're going to be thrown on a team and expected to contribute. Establish rapport and then try to change the world. Don't do it on your first day. (Again, if that's a problem, big company life might not be for you. This is a team effort, after all.)
Reviews preferred local simplicity over global simplicity; abstraction was discouraged.
The first half is true, simply because reviews are line-based rather than repository-based. But there is a design review stage of projects when you discuss high-level design. Before working at Google I was not used to writing design docs before writing code, but now that I'm used to it, I like it a lot. It's saved me a lot of time, I think.
Code reviews depend on your team's conventions. My team is super strict and nit-picky, but others will almost rubber-stamp changelists. Different personalities, different results. If you want harsher code reviews, review other code more harshly, I would say.
Abstraction is not discouraged.
Productivity was graded without much regard to the amount of technological debt accrued. (Though to be fair, this is a hard problem.)
I don't know much about this. I work with lots of people that are at the level I would like to be at in a few years, and I don't foresee any problems for myself. My bosses give me the impression that I choose good objectives and key results, and I pretty much meet all my key results. But it's only my second quarter at Google, so who knows.
As for the corporate culture points, I basically agree with everything he says. I don't think Google+ is the finest Google product the world has ever seen. That's my opinion and since I don't work on the project, nobody really cares what I think. I can live with that and even think it's reasonable. But if you want to associate everything a company of 30,000 does with your personal thoughts and beliefs, you might be disappointed.
I know I sound like a total pushover when I say "that's just how the world is and you should adapt yourself", but honestly, I just want to receive large quantities of money in exchange for playing with computers, and Google gives that to me. And really awesome food. And wonderful coworkers. So I can't complain much, especially after the other places I've worked.
YMMV. Why not try Google out for a year or two and see for yourself? You might like it, you might not, but you'll definitely learn something.
(And if you want to apply but don't have a contact at Google, I'm happy to be one. Tell me about yourself and I will try to get you in touch with the right people. jrockway AT google.com :)
>>If you are unhappy with Google's corporate policies, try working for Bank of America. You will not complain as much :)
Comparative 'heavens' only work for so long. I don't work for Google, but I can tell you: "try working for <something that sucks more> to know how <something that sucks less> is better" arguments often lose out over time.
No matter where you work, you gradually experience the law of diminishing utility, because when you spend time at a large corporation, or in any activity where big teams are involved, or anything big, you get subjected to the human side of things. Averaging, policies, policing, and everything else required to bring sanity will bug you.
As for Java and other technology matters: I've realized that technical excellence, solving problems, et al. are not the goals of any manager, even a manager at Google. Most managers are just bothered about keeping their floor running; they want a language like Java that has an endless supply of people in the market, even if 90% of them are of mediocre quality. Managers just worry about maintaining the status quo until a paradigm change forces them to change.
Leaders do great stuff, not managers. Not product managers, not your senior managers, not your VPs. None of them do great stuff. All they do is keep affairs running without chaos and maintain the status quo, which means x% growth of the business with some metrics of quality, employee satisfaction, and other everyday company metrics. This is all managers can ever do, no matter which company you join.
So as long as you are an ordinary soldier under an ordinary manager, no matter which company you work at, you will get frustrated with time.
This comment is a gem and I may steal some things from it for a blog post one day.
My favorite part:
On the other hand, if you are the "I want to work by myself and be a product manager, engineer, sysadmin, and CEO", then the big-company culture is not for you. I write code and talk about writing code. That's it, someone else does everything else. I like that, but if you don't, you won't like the big company lifestyle.
I'm the complete opposite; I love doing all of those things.
I have a lot of respect for people on both sides, because there is so much to do that it takes all of us.
But I never expressed the difference as succinctly, thanks.
Overall, your analysis does make Google seem like a nice place.
But (and this may sound a little condescending), I can't help thinking it's a nice place to go when I'm less young and crazy. (For reference, I'm two years away from graduating, so I'm not even in the "real world" yet :).)
I think I would be much better off starting with a tiny startup. Partly because I have big ideas and don't like rules much, partly because I share the same biases as the blog author, and partly for practical reasons (my expenses right after graduating will be as low as they ever will be, after all).
But starting off at Google? I don't think so. It seems less fun and exciting than a startup, and I'm worried about becoming dependent on big company resources.
Maybe I'm an idealist, but I want an adventure straight out of college, and I can't help thinking that Google (even if it is much better than Bank of America) is much less of an adventure than I could have elsewhere.
I'm really glad you wrote this, and I'm now linking to it from my post. Definitely agree that you should try Google for yourself. It didn't work out long-term for me, but hopefully it's obvious that that isn't saying a lot.
But Google does have a strong C++/Java/Python bias. Even if it hasn't touched you personally, it is very likely that a new Googler will be using one of those 3 languages as her main language.
Most large companies (and even mid-sized ones) have some kind of "technological environment of choice" that is sort of a given (three languages is already not so bad). That does not mean it cannot change or be lifted for a specific project, just that it is "what you are supposed to use", and that you are expected to provide strong arguments to be allowed to deviate from it.
In most situations, this is actually wise: beyond development, you have a whole process (deployment, quality control, etc.), and those teams will have the required tools and expertise for the languages they use on a regular basis. And when you start a project at a large company, you know that other people will take it over later, so it is easier if it stays close to the company's technical standards.
Having written an application in Ruby in a Java shop, I remember protesting the "rewrite in Java" that happened just after the prototype phase. With some distance, I think I understand the decision.
There's nothing wrong with having a uniform set of languages. I just wish they were good languages and not Java or Python.
Having everybody use their own programming language would be crazy. Unless they're domain-specific languages, in which case it would actually be very cool.
Still, if I'm to be bound by a language choice, I'd really like to be the one making it. Which is why I think working for a tiny startup would be more fun.
It is important to know your own preferences, but as you said yourself, you then need to make your choices accordingly: it could be the startup, or it could be aiming for a position in a big company where you have a say in those choices. Both are interesting.
As for the first point, I will not start a discussion about what a "good" language is (a large part of that being in the eye of the beholder and in the requirements of the project).
I don't disagree with you on any count, but I think, to a degree, there are going to be relatively standard languages at any large company. The majority of Microsoft programmers are probably out coding in some variation of C#/.NET and the like, while Apple programmers are somewhat locked into Obj-C etc.
When you reach the kind of scale these companies have attained (30,000 people at Google!), you need some sort of internal commonality in language knowledge to allow effective code reviews and compatibility between divisions. Java, Python, and C++ each have their own flaws (god knows Java has some), but I believe allowing free-spirited use of whatever language best fits each project would lead to a net loss in productivity (in review, in lack of reusability, etc.), even if it were a better approach from a theoretical perspective.
Businesses don't like lost productivity.
Google still uses Java for the same reason we all still speak English instead of Esperanto. An existing community of fluent speakers is far more valuable in daily practice than theoretical superiority.
Erm, isn't the Play framework for making web apps? I don't see how it could make Java in general bearable, especially not if you're looking for functional features and way less boilerplate.
I like Guava, which at least lets me write the functional version before I try to figure out how to write idiomatic Java. Being able to do FP is a nice "fallback" option, compared to always having to think imperatively. (for loops for mapping and filtering? Gag me with a spoon.)
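For instance, something like this sketch of mine; the Iterables.filter/transform calls and the Predicate/Function types are standard Guava, the rest (names, data) is made up, in pre-Java-8 style with anonymous classes:

    import java.util.List;
    import com.google.common.base.Function;
    import com.google.common.base.Predicate;
    import com.google.common.collect.ImmutableList;
    import com.google.common.collect.Iterables;

    public final class GuavaDemo {
        public static void main(String[] args) {
            List<String> words = ImmutableList.of("foo", "barbaz", "qux", "quuuux");

            // filter + map without a hand-rolled for loop
            Iterable<String> longWords = Iterables.filter(words,
                new Predicate<String>() {
                    public boolean apply(String s) { return s.length() > 3; }
                });
            Iterable<Integer> lengths = Iterables.transform(longWords,
                new Function<String, Integer>() {
                    public Integer apply(String s) { return s.length(); }
                });

            System.out.println(ImmutableList.copyOf(lengths)); // [6, 6]
        }
    }

Verbose, sure, but the shape of the computation (filter, then map) stays visible instead of being buried in loop bookkeeping.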
As someone who worked there 4 months last year, this is pretty accurate. I had a more negative experience because I got consigned to an obvious tarbaby team that was bleeding an engineer a month, but this seems more like the norm...
And yes, Google lays claim to everything you write while you're there unless they explicitly agree otherwise. They have set up a committee for making those decisions. This is probably a good thing IMO.
My misadventure with Google seems to have occurred because I was a mensch to my former employer and stayed a couple months longer to finish my project after Google recruited me out of the blue like gangbusters.
During this time several really intriguing projects in Google expressed interest in me but immediately lost said interest when I told them how long it would be between their phone call and my arrival at Google.
So when I got there, after having quit my former employer, I arrived unallocated and got thrown into the blind allocation pool. There were seemingly 3 choices for me, but 2 of them were effectively withdrawn for different reasons almost immediately (and it was pretty bizarre the way this played out, IMO). This left me with one choice: to work on a team that made zero use of any of the skills that attracted Google to me in the first place (and I got the standard silly speech about Google wanting generalists). But having quit my job for this, I accepted and decided to dive in and give it my best. It didn't work out, in a big way: it was a crappy project, and I was blocked from transferring out and even taken aside and scolded for discreetly talking to other teams.
Fortunately, I've been at enough dysfunctional startups to have become adept at the diving catch (I just never in a million years thought something like this would happen at Google). So after 4 months of this nonsense, and talking to some friends who had gone through similar experiences there in the past wherein it took them 2-3 years to climb out of the crater it made in their career, I left for greener pastures, taking a very slight cut in pay to go back to doing what I do best (which is insane because IMO Google really needs people who do what I do well but I digress).
If you don't know what a word means, the first step is to check the dictionary! "tar baby: a difficult problem that is only aggravated by attempts to solve it".
I didn't know what varelse meant by tarbaby. Try googling "tarbaby team" and seeing what comes up.
That being said, I'm clearly in the wrong, sorry for having been a dick. I know the phrase tarbaby, I'd just never heard of a "tarbaby team" before and was thrown off.
I gather he meant the colloquial meaning in most places of the world, which is something to the tune of "something best avoided rather than confronted". I believe it originates from a folk story or fable.
However, I assume you thought he was using it in the more offensive slang sense, as a term for African-Americans. Chances are he might not even know it has that connotation, as it's pretty specific to America.
I'm an American who grew up in the deep South. I've been exposed to just about every racially derogatory term you can think of, in every direction, and I've never heard "tarbaby" used as anything but "a trap to avoid."
I'm all for keeping the discourse civil, but the term was clearly used in a nonracial fashion. Please don't destroy perfectly useful words in an attempt to act offended on someone else's behalf.
Nobody uses "tar baby" as a slur in America. It's like "niggardly" - ignorant people use an incorrect interpretation to try to shut down people they don't like.
That said, you'd have to be crazy to use "niggardly" now after the reaction it got whoever that was. If you use words that most of your audience doesn't know, what does it say about you? :)
"Pathological love for Java and anything resembling Java."
"Java was viewed as being "good enough"; alternatives like Scala and Clojure were not considered."
Perhaps Google should have acquired Sun, and worked on updating Java to keep up with other C derivatives (e.g. C#).
What's that essay PG wrote on starting Viaweb with RM where he said potential competitors advertising for Java and C++ developers did not make them nervous?
Spencer's comments are generally accurate. I might quibble with some of them or say that they vary from team to team. For instance, I have used R, Python, Java, and Sawzall in my work at Google and have had freedom to choose technologies on some of my projects with input from team members.
It is true that most of the larger production systems like the Search Engine are mainly written in C++ and Java and that functional programming isn't widespread, yet.
But you are free to take on a 20% project to show the value of other ideas.
I would also challenge technology cons #2, #10, and #11. I think a lot of Google Engineers do tackle fundamental problems.
If you did not believe in what you said when you said it, you had better not have said it. The world is filled with enough empty small talk. Moreover, if even you do not believe in what you say, who the hell would believe you ("read you") the next time you say something?
If you thought it was right but have changed your opinion since, please state that clearly. It is a strong mark of cleverness (to me).
If you still think it is right, please stand by your opinion; it may have some value for others.
(Just finished a nice podcast about Wittgenstein...)
Hang on, you're being a little unfair about this. I believe in what I said. I think FP is a great step forwards and I think it's going to win out. I'm biased because I think this.
But Google is worth billions of dollars and is succeeding wildly at innovating and changing the world, and I'm a lone developer. I can't very well stand here and tell Google, "hey you billionaire company, you're doing it all wrong; I've got the magic bullet right here."
Part of the dichotomy is whether to define success as financial wealth or as some kind of more abstract software quality. And I think that's a legitimate variable; if Google is wealthy but has lower-quality software for it, then they and I just have different priorities.
I stand for my opinion by investing all of my spare time into FP language development and research, but until I see a billion-dollar company using it, I'm not going to call anyone a loser for choosing Java.
What's sad is that when this gets publicity, the author has to soften all the things he said, which were probably honest, in order to avoid trouble / future employment issues.
Employment is always so taboo. It's not so much free speech if you don't wanna live with the "consequences".
That's not why I did it. I felt bad because my family and friends know me as being unreasonable, idealistic, etc. But HN doesn't know anything about me and might assume I'm an otherwise reasonable person. So I wanted to make sure I wasn't badmouthing Google here. They really are an awesome company and the point was never to make them look bad. More just to explain about where the incompatibility was.
"High turnaround time for my own open-source projects. [5] ... two-month lag before I got an official reply from them .."
What I would give for two months! At my current employer the turnaround time has been up to six-plus months, which has killed a number of my personal projects.
FWIW, Chris DiBona's record response time to a request I've made to open source a project was somewhere on the order of seconds. Maybe 10 seconds, I think. It was like he had his finger hovering over the "Approve" button. Patent approval takes longer and does seem to push things into the 3-7 day range, but I once didn't get a response from them, and Chris basically just said, "Well, that's their problem. They were supposed to respond and they didn't, so you can go ahead and release." The open source team at Google is amazing and, frankly, more companies should copy them. This stuff matters.
Passing the torch to the next generation of tourists-cum-experts. You can't expect these folks to stick with their avocation more than a few months before quitting.
I would like to hear your thoughts on Clojure's serious design flaws. I'm currently learning the language and haven't come across anything warning me of such things.
I have two years experience with Clojure and about a year working with Scala (plus use of Ocaml and a bit of Haskell). Clojure and Scala both have warts. Many, many warts, as all languages do. Still, I can't say that I've found major, intractable "design flaws" (as in, design decisions leaving me convinced I could do it better) in either. They run on the JVM, and they're ambitious languages, so warts are inevitable. That's hella hard, and both languages do an excellent job considering the constraints involved.
Most of the things I find myself disliking in these languages have more to do with JVM legacy (e.g. type erasure in Scala) but that's also what gave them a chance at being mainstream, so no complaints there.
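For instance, the erasure in question is easy to see from plain Java (a quick sketch of mine):

    import java.util.ArrayList;
    import java.util.List;

    public final class ErasureDemo {
        public static void main(String[] args) {
            List<String> strings = new ArrayList<String>();
            List<Integer> ints = new ArrayList<Integer>();

            // The element type is gone at runtime: both lists share one class.
            System.out.println(strings.getClass() == ints.getClass()); // true

            // Which is also why `x instanceof List<String>` won't compile:
            // there is nothing left at runtime to check the parameter against.
        }
    }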
It's worth noting that reified generics also have their limitations. I believe Haskell also does a similar level of erasure. Reified generics make it harder to include higher-order types, so the level of abstraction in Scala's collections or monad transformers would be much harder without type erasure. Haskell's type system is better at keeping the annoyance of erased types away by not just casting everything to a top type.
The only con that really made me nervous was #15, and even without context it doesn't say a lot. Boilerplate isn't always bad, though it usually hints at things that could be solved better and more reliably.
Funny, he complains about over-usage of OOP but off-handedly dismisses Go. Not sure I understand that.
offtopic nitpick: I have a really nice widescreen monitor, and you are giving me 13 words per line using about a quarter of the horizontal space. Why is that?
That is the sort of thinking that gave us the X Windows configuration file.
Nobody wants to make that call for most things in their life. Nobody has time. There is definitely a minority of people who do want to carefully size each browser window for optimal reading, but it is a pretty small minority.
As for the "pathological love of Java", Spencer has a distorted view. This will vary from project to project. I'd say C++ is more pervasive although Java may well dominate applications (rather than infrastructure) code.
Also there are lots of people who use Python but ultimately its production use is limited.
A common theme here seems to be that as an engineer you are discouraged or disallowed for doing things that are too "clever". This is something I approve of. Other people need to maintain your code after all.
For example, the abusage of automatic semicolon insertion that the Twitter devs behind Bootstrap seem to love (as some kind of "see how smart I am" display) would never survive at Google (our style guide expressly forbids excluding semicolons).
Anyway, sorry it didn't work out. Good luck, Spencer.