Learn more programming languages, even if you won't use them (thorstenball.com)
602 points by ingve on April 13, 2019 | 304 comments



Learn more? I'm actively trying to unlearn a bunch of languages at this point!

Of course, when I was a wageslave in the Bay Area, it was nice to know Python, Java, JavaScript, Scala, etc. - it got me jobs every 2-3 years & put food on the table.

Now that I'm in academia, it's completely upside down. Literally everybody is way more productive than me in just about any task. The other day I was supposed to program a Poisson clock, and it took forever. Meanwhile my advisors & classmates with zero industry experience chug through these tasks effortlessly. Lately I've confirmed it's because they know exactly 1 language, but they know it so well & in so much depth, they know exactly where to look - the right MCMC library, the right slice sampler, the right optimizer - that's what matters.
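
(For the curious: a Poisson clock just emits events separated by exponentially distributed waiting times. A minimal numpy sketch - names and parameters made up, not the actual assignment:)

    import numpy as np

    # A Poisson clock of rate `lam` emits events whose inter-arrival
    # times are i.i.d. Exponential(lam), so simulating one is a short loop.
    def poisson_clock(lam, t_end, seed=None):
        rng = np.random.default_rng(seed)
        times = []
        t = rng.exponential(1.0 / lam)
        while t < t_end:
            times.append(t)
            t += rng.exponential(1.0 / lam)
        return np.array(times)

    events = poisson_clock(lam=2.0, t_end=10.0)  # ~20 events on average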

There's no point programming the same bloody front-end & back-end in a dozen different languages just because of industry constraints. That's like eating spaghetti with a fork, with a spoon, with chopsticks, with your fingers... it's the same fucking spaghetti. Cook something else, not just rotate dish-ware.


It's perfectly possible to know several languages in-depth. It just takes a lot of time and effort. Moreover, it gets easier with time and practice. Knowing N languages makes learning the (N+1)th language just a bit easier. For me the turning point was around N == 10, after which learning a new language - and yes, in-depth, including idioms, stdlib, some external libraries, maybe some framework (if needed), and also some facts about the implementation and its inner workings[1] - takes a week at most, and for simpler cases a weekend is enough.

OTOH, it took around 15 years to get there. I understand that this kind of dedication may not be possible or worth it for everyone.

[1] Because there are very, very few features unique to each language. Unless you go to the fringes, you're bound to see the same concepts applied again and again and again, which gets really disheartening after a while. If you don't believe me, try naming any "new" feature which was added to your favorite (EDIT: I meant mainstream - TIOBE top 10 or 20 - lang here!) language recently - and I'll show you that same feature implemented 10 or 20 years ago in another language(s).

EDIT: more neutral wording.


I don't doubt it's possible.

What I do doubt is that it's possible to know several languages in-depth and at the same time be at least very good in a specific domain (webdev & mobile apps are not domains), have good algo and problem-solving skills, have decent to good soft skills, know at least a little bit of PM, and also be a good conversation partner and well-informed citizen of the world, while also being a good husband/wife/dad and being reasonably competent at some hobby that is hopefully not related to computers.

Learning and keeping up to date with the (N+1)th language takes space from learning the many other interesting things this world has to offer.


Yeah, it's a matter of personal choice.

My choice was to forgo any hobbies unless they are somehow useful for programming, ignore most of being a "well-informed citizen of the world", ignore the whole mating and breeding business, and focus solely on programming at first, then on programming-related fields that piqued my curiosity. I'm still competent enough in other programming- and work-related areas - at least I've never been told otherwise - but, indeed, half of what you write about is nonexistent in my life.

Actually, if there were a monastery for programmers, with an ascetic lifestyle and a good broadband connection to the Internet, I'd go there in a heartbeat.

OTOH, I don't think you'd need to go to such lengths normally. As a programmer, you'll be working for 40 years at least, right? 30 minutes of reading a day will eventually get you to the respectable N (if you choose this particular field), it'll just take longer.

The OP said the fact that he learned multiple languages makes him fall behind his peers in coding tasks. I'm saying that he would have no such problems had he learned these languages in-depth and it's perfectly possible to do so. I admitted immediately that the effort needed for this is at least substantial, no disagreement here - I just pointed out that, after that initial effort of consistently learning N languages for 2*N years, the following languages become very easy to pick up. That's it :)


Even then, I don’t think you can ever really learn even one mainstream language in depth. For something as ‘simple’ as C, every compiler is slightly different, includes different flags, and runs on different CPU generations, and this stuff is constantly evolving. A language like Java has a giant and evolving standard library plus multiple compilers, etc. And that’s ignoring merely popular libraries, etc.

Really learning what you can safely ignore in whatever context is the most important part; everything past the basics is domain specific.

Having said that, I have read several language specifications, which can be useful for general understanding. But that’s simply the tip of the iceberg.


You're moving the goalposts a bit here, I think. Or maybe it's my fault - I should have defined 'in-depth' more precisely.

Anyway, I specifically mentioned knowing "some facts about the implementation" instead of "knowing the implementation inside-out". The same is true for tools, libraries, and frameworks - there is no need to remember every function/method signature and every class name in a library - as long as you know the most important parts and can quickly and accurately find the relevant documentation for the rest.

To me, 'knowing in depth' (as a language user) means that no matter the question, you know where to search for the answers. There's no need to remember the answers themselves, although it kind of happens naturally with repetition anyway.

On the other hand, it's also important to know which questions are not worth answering. It's exactly as you say:

> Really learning what you can safely ignore in whatever context is the most important part

So, to sum it up: if you can do both these things, then, to me, you have in-depth knowledge about the language (or actually any other field). The next step - ie. actually knowing all the answers by heart - is mostly useless for language users and only matters for (and is best left to) language implementers/advocates/nerds.

> A language like Java has a giant and evolving standard library plus multiple compilers, etc. And that’s ignoring merely popular libraries, etc.

Java's stdlib is only "huge" in comparison with C, where the stdlib is nearly non-existent. The popular libraries may be ported from other languages or even directly called via FFI. And if a library is really good, it will be ported to other languages too, which means you'll have an easier time using those languages in the future.
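
(For the FFI point, a rough Python sketch using ctypes to call libc's strlen directly - the library lookup is platform-dependent, so treat this as illustrative:)

    import ctypes
    import ctypes.util

    # Calling into a C library via FFI: here, libc's strlen from Python.
    # find_library may return None on some platforms; this is a sketch.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))
    libc.strlen.argtypes = [ctypes.c_char_p]
    libc.strlen.restype = ctypes.c_size_t

    print(libc.strlen(b"hello"))  # 5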

Even without all that, though, if you know more-or-less what's where and can quickly search for the details - you're good. It's not that easy to learn this - you need to remember the structure of the docs at least - but it's much easier than actually learning and remembering all of the stdlib, etc.


Languages change. In-depth knowledge comes with an ongoing cost.

For this reason I think it may be rational to only learn what will be used in the near future and spend the saved time on more timeless knowledge.


> on more timeless knowledge

Like interpreter and compiler architectures, the kinds of type systems and their meanings, ways of specifying semantics formally, and also the structure of the novel kinds of abstractions proposed by researchers in some experimental languages.

If you learn all of that today, the next time you'd need to learn anything new about a mainstream PL would be in 10 years, if not twice that.

Example: many modern languages only recently started supporting some of the Algol-68 features. That's 50 years to get a lambda into the language...


I manage to do it, though I've been studying for 20 years. Helps that I don't watch sports.


Define "know", though. A C++/Java programmer can quickly write some Python code, but it isn't Pythonic. To me, being able to write a program in a language means more than being able to write some code in it; you also have to know the paradigm it's supposed to be used in, as well as the layout styles, build tools, commonly used libraries, etc. Even learning a new library often takes months alone.
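
(A tiny made-up illustration of that gap:)

    names = ["ada", "grace", "edsger"]

    # Working, but Java-flavored, Python:
    result = []
    for i in range(len(names)):
        result.append(names[i].upper())

    # The Pythonic version of the same thing:
    result = [name.upper() for name in names]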


Yes! Exactly. Oftentimes I look for "X for Y developers" style tutorials to try and fast-track that for myself (e.g. npm ≈ Maven, but this is how they differ), but high quality ones can be hard to come by & it takes a long time to fully grok a new language.


I think this is true for just learning the language alone. But for the type of productivity the grandparent post is talking about, the language ecosystem can be a bigger challenge, at least for me.

It wasn't in academia, but I spent a while working on one of the major native front end platforms almost exclusively and had the opportunity to get very familiar with it. And by 'familiar' I don't mean just knowing where to look things up, I mean knowing many of the exact APIs by heart, and having a wide variety of things I could just type in (without much looking things up) even if it involved working with images, network requests, JSON, file IO, string processing, dates/scheduling, collections, serialization, menus, windows, controls, input, drawing, sound, etc.

Another big thing was when there are multiple ways to do the same thing, having done it both ways and having direct experience of why one choice will likely be more effective in general or for a particular project. This can apply to choosing libraries/frameworks or app architectural decisions.

I noticed I missed this level of familiarity quite a lot when I switched to a different ecosystem later. Maybe this is worse for native UI development because of the large API surfaces. But I found needing to consult the documentation again to be a lot slower.

Having had that experience has kind of made me wonder about full-stack vs specialization, and optimal-language-for-the-job vs standardizing (although of course some languages are totally inappropriate for some jobs). At least in my own experience I found sticking with the same language for a while to have continuing productivity benefits well beyond the 1 week/month period.


And for a specific example of the sort of ecosystem stuff I am talking about, to save/load some local data on iOS you have some API options:

Core Data

SQLite

NSArchiver

NSKeyedArchiver

NSUserDefaults

JSON

Property lists

Protocol buffers

stdio

NSData

memory-mapped files

Realm

This is without getting into anything too obscure. This list is probably also out of date.


> But for the type of productivity the grandparent post is talking about, the language ecosystem can be a bigger challenge, at least for me.

My claim was that:

> learning a new language - and yes, in-depth, including idioms, stdlib, some external libraries, maybe some framework (if needed), and also some facts about the implementation and its inner workings[1] - takes a week at most

I also tried to define "learning in-depth" in another post:

> To me, 'knowing in depth' (as a language user) means that no matter the question, you know where to [quickly] search for the answers. There's no need to remember the answers themselves, although it kind of happens naturally with repetition anyway.

> On the other hand, it's also important to know which questions are not worth answering. [ie. what to ignore while learning]

And I stand by it: I believe that there's no need to memorize too many details to be productive; you just need to be able to quickly and accurately find the relevant details, no matter which details they are. Various docs indexes and viewers, your IDE features, cheatsheets printed on a wall - all of that can help you if you forget a bit of syntax or the signature of a function. There's nothing, other than just reading a book or two, to help you if you don't understand a crucial concept in a language or the architecture of the library/framework.

Also, all these concepts are reused all over the place. For example: "Io is a purely-object-oriented language with prototypal inheritance which allows objects to have many parents." Each word here has a meaning, and that meaning is (well, mostly) standard across most programming languages. With this description, if you know all the words, you just learned 3/4 of all there is to Io OO. Another example: "Dylan is a purely-object-oriented language which is class-based, allows multiple inheritance, and also decouples methods from classes by relying on generic functions, which use multiple dispatch - similar to CLOS." This one is longer, but it's still a single sentence, which conveys most (or, if you're familiar with CLOS, all) of the characteristics of Dylan's object system. Sure, there are obviously more features in Dylan that you need to know before you start coding... but each of them is defined with a single sentence. It takes half an hour to go through them all, and - again, if you know the exact meaning of each word - at this point you know more about the language than a beginner programmer would learn in a year or two.
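
(To make the Io sentence concrete, here's a toy Python sketch of prototype-style lookup with many parents - hypothetical, not Io's actual semantics:)

    # Objects are plain dicts; a missing slot is looked up by
    # delegating to each parent in order - 'prototypal inheritance
    # which allows objects to have many parents'.
    def lookup(obj, name):
        if name in obj:
            return obj[name]
        for parent in obj.get("_parents", []):
            try:
                return lookup(parent, name)
            except KeyError:
                pass
        raise KeyError(name)

    animal = {"legs": 4}
    robot = {"battery": "full"}
    robodog = {"name": "Rex", "_parents": [animal, robot]}

    print(lookup(robodog, "legs"))     # 4, via the first parent
    print(lookup(robodog, "battery"))  # 'full', via the second parent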

> I mean knowing many of the exact APIs by heart

I was like this in the past, so I know what you're talking about. Unfortunately, my epilepsy makes my memory reset significantly from time to time - I can't do it, or at least not for long. It caused a loss of productivity for a bit, but it went back up when I started using well-configured tools. This is why my definitions of "knowing in-depth" above are what they are - I live them, for better or worse.

> (without much looking things up) even if it involved working with images, network requests, JSON, file IO, string processing, dates/scheduling, collections, serialization, menus, windows, controls, input, drawing, sound, etc.

Yes, that's what tends to happen with repetition - unless your memory is impaired in some way - you just remember things. There's nothing else than repetition that can get you to that point, which also means you "only" need repetition to get there. In other words, it happens naturally with time, provided you consistently work with the given tech stack, and that you use it for the various things you mention.

But again: good tooling makes the rote memorization mostly unnecessary, and I really don't see many benefits of keeping all the idiosyncrasies of the whole stack in your head. It's a completely different story if you code in Notepad without Internet access, though.

Further, all the things you mention are implemented in all general-purpose languages. Moreover, most implementations look really similar. In almost all languages "network requests" are built upon sockets - an OS-level mechanism. Images are more tricky because of decoding/encoding (which is handled almost everywhere by simply linking to libjpeg & co.), but after that they're a 3-dimensional int array/vector/list/what-have-you (yes, there are other representations - and yes, they are all implemented across most PLs and have very similar characteristics everywhere). "File IO", too, is based upon OS-provided streams, or a crippled reimplementation thereof. It's the same everywhere. "Collections" are almost language-agnostic: the names and interfaces may (slightly) differ, but the underlying data structures are the same everywhere. "Input", "windows", "controls", "drawing" are event-based everywhere, and almost all languages have bindings to all the GUI frameworks. You can write a GTK+ app in Python just as well as in OCaml or C# (even though GTK itself is C (I think?)) - and you'll get the same names and signatures, even.
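
(To illustrate the sockets point, a bare-bones HTTP GET in Python - the same connect/send/recv calls sit under the HTTP clients of most languages:)

    import socket

    # A minimal HTTP/1.1 GET straight on top of the OS socket layer.
    with socket.create_connection(("example.com", 80)) as s:
        s.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n"
                  b"Connection: close\r\n\r\n")
        response = b""
        while chunk := s.recv(4096):
            response += chunk

    print(response.split(b"\r\n")[0])  # e.g. b'HTTP/1.1 200 OK'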

You'll see for yourself in the decades to come - you're likely to switch stacks at least a few times by the time you retire. You'll see that most divisions in programming are illusions and - quite simply - matters of taste/personal preference, while in reality the differences between the popular languages, stacks, frameworks and systems are minuscule.

> Another big thing was when there are multiple ways to do the same thing, having done it both ways and having direct experience of why one choice will likely be more effective in general or for a particular project. This can apply to choosing libraries/frameworks or app architectural decisions.

These are mostly language-neutral - the exact same considerations apply when deciding on library/framework or approach in every single language.

> I noticed I missed this level of familiarity quite a lot when I switched to a different ecosystem later. Maybe this is worse for native UI development because of the large API surfaces. But I found needing to consult the documentation again to be a lot slower.

It could be because of a lack of proper helpers in your editor etc., but either way: if you stick with the new stack, you'll learn it naturally with time. You'll forget some of the previous stack (although I'm frequently surprised by how well I can recall specific quirks in technologies from 20 years ago), which is also natural. Memory just seems to work this way. It'll take a few months to remember all the relevant details - less if the new tech is similar to something you already know.


I'm not really disagreeing that it's possible to be productive without memorizing all that (and I've since moved on to other technologies myself). Also I'm not a vim holdout, I use JetBrains IDEs with all the bells and whistles and use refactoring and keyboard shortcuts etc. But I feel like I was substantially more productive after spending a (relatively) long time on the same stack. So I'm just providing another individual data point that probably more closely matches dxbydt's experience.

Those sort of details I learned from extended use weren't really important for understanding things conceptually. But of course actually implementing something involves going through all the minor details and making lots of small choices. Being able to just breeze through that was really nice.

To use your example of a network request: on iOS it may be true the API is built on top of sockets or kqueue or whatever (I believe they've moved to user-space networking now but I don't really keep up). But you could use the toolkit APIs blocking on a background thread, or integrated with the main event loop with a delegate object, or on the event loop with a callback block. There are trade-offs to the different ways of doing it (especially on a team project): some ways are more verbose, some introduce more thread-safety risk, using blocks everywhere may increase the chance that someone generates a memory leak through cyclic references, processing large requests could be more performant on older phones on a background thread, some patterns make it easy to cancel the request, etc.

I don't really miss iOS development. But I do miss the ability to go through all those details/trade-offs as quickly, and I kind of want to stick with one thing long enough to develop that again.

Especially because as you observe many of the underlying concepts are the same anyway (especially among the popular languages that are the practical choices for most projects because of ecosystem/tooling concerns). To extend dxbydt's analogy I'd rather focus on the dish (problem-domain) than the cooking instruments.


> But I feel like I was substantially more productive after spending a (relatively) long time on the same stack.

That's obvious, although the "substantially" part is a bit vague. But it's still obviously true - you will be slower if you need to check the docs often, it's a given. But:

> I kind of want to stick with one thing long enough to develop that again.

It should take just a few months at most. You'll get there much faster than you (I assume, sorry if I'm wrong) expect. :-)

Or is it that you're already many months after the switch, and you still feel that you're checking the docs very often and it slows you down? Maybe there's something wrong with the docs, then? Otherwise, you should probably try learning in a more structured manner, like reading a book or doing a MOOC.

> But you could use the toolkit APIs blocking on a background thread, or integrated with the main event loop with a delegate object, or on the event loop with a callback block.

Exactly the same options are present in every general-purpose language. There are quirks, like Python and threading or JS and the event loop, but you get the choice of blocking/non-blocking, and then whether the non-blocking is based on threads and synchronization or on an event loop (or coroutines, green threads, CSP, actors, etc.). The only thing that changes is the nomenclature, which for some reason language developers like to reinvent all the time.
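
(The Python spelling of that choice, as a rough sketch - a blocking call vs. the same work scheduled off the event loop:)

    import asyncio
    import urllib.request

    # Blocking style: fine on a worker thread, bad on a UI/event loop.
    def fetch_blocking(url):
        with urllib.request.urlopen(url) as r:
            return r.read()

    # Event-loop style: the same request without stalling other tasks.
    async def main():
        body = await asyncio.to_thread(fetch_blocking, "http://example.com")
        print(len(body))

    asyncio.run(main())  # asyncio.to_thread requires Python 3.9+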

> some ways are more verbose, some introduce more thread-safety risk, using blocks everywhere may increase the chance that someone generates a memory leak through cyclic references, processing large requests could be more performant on older phones on a background thread, some patterns make it easy to cancel the request, etc.

Ok, some of these are indeed specific to a given platform and have no direct equivalent in (as many) other languages and platforms.

Yes, you need to learn those - as there's no equivalent in other languages and stacks, you have no choice but to learn, no previous knowledge will help you with them.

However, these tend to be higher-level concerns, where understanding the concept is still more important than memorizing all the relevant details. You write: "some ways are more verbose", which is obviously true, but IMO it's more important here to understand what "verbosity" is, what trade-offs it presents, and what the "correct" level of verbosity for the task is. Knowing this, finding the solution with the right amount of verbosity is as simple as opening a bunch of libraries' GitHub pages and quickly glancing at the code examples there.

---

All in all, I'm not disagreeing with you at all. You're right that remembering a lot of details makes you faster. You're right that there are considerations unique to the stack or language.

What I'm saying is that it's not hard to remember all the essential details: you only need spaced repetition, which happens naturally as you work. Further, while features unique to a lang or stack exist, they are very rare, and it's not hard to learn them (exactly because they're so different from the rest, which makes them stand out). All the other - non-unique - features and considerations form a large pool of concepts which are frequently reused across many (or most, depending on the feature) languages and stacks. Internalizing this pool of concepts lets you effortlessly switch stacks and languages; and while you're right that productivity will drop after the switch, I assert that with a bit of effort it'll get back to normal levels after a short time - just a few months (for real mastery it would take longer, of course, but that's true always and for every kind of work).

Anyway - thanks for the discussion, I enjoyed it very much. I hope it wasn't too boring on your side :-)


I mean, I basically agree with you, but I like your challenge.

What about Rust's borrow checker? I considered it novel, but I'd love to find out it was stolen from somewhere.


Rust’s borrow checker was inspired by Clean [1], whose uniqueness typing allows safe in-place updates (filling the role that monads fill in Haskell)

[1] https://en.m.wikipedia.org/wiki/Clean_(programming_language)


Clean is not a mainstream language.


He didn't claim it would be, though. He just said, for every feature implemented in a top20 lang, you can already find it implemented somewhere else, 10-20 years ago.


That holds for every mainstream X. It makes the challenge ridiculous.

Here's a challenge: come up with something that people generally think is fundamentally new. Given enough HN commenters, someone will present an example of how something super similar to that thing was already there 20-30 years prior.


It's especially relevant in PL or databases. DB2 did such foundational work in database theory, and Python 3.6's new dicts finally made some very simple database theory available to users.

In PL, there's a massive number of awesome languages that pretty much existed for the purpose of writing a PhD or doing something funky with semantics. It takes /really really/ long for those ideas to get into an industrial-strength language. Look at Rust, for example: it's the first instance of real algebraic types in a non-GC language (excluding C++), letting you do wonderful things like https://github.com/lloydmeta/frunk
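
(For a rough feel of what algebraic (sum) types buy you, a Python 3.10+ sketch with match - the real thing in Rust or ML additionally gives you exhaustiveness checking:)

    from dataclasses import dataclass

    # A sum type, sketched: a shape is exactly one of these variants,
    # and match dispatches on which one it is.
    @dataclass
    class Circle:
        radius: float

    @dataclass
    class Rect:
        w: float
        h: float

    def area(shape):
        match shape:
            case Circle(radius=r):
                return 3.14159 * r * r
            case Rect(w=w, h=h):
                return w * h

    print(area(Circle(1.0)))  # 3.14159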

Some of my favorites are

http://bloom-lang.net/

https://www.luna-lang.org/

https://www.propellerheads.com/en/reason (I contend it's a fantastic programming language for the task)

https://cseweb.ucsd.edu/~wgg/CSE131B/oberon2.htm


> That holds for every mainstream X. It makes the challenge ridiculous.

But that was the point... :(

Or put another way, the major languages only include the "middle of the road", conservative features: mostly safe, uncontroversial, and well-specified and tested (it's all "mostly", "approximately" so, there are obvious exceptions). There is a limited supply of such features, which makes all the major languages have non-trivial amounts of overlap in terms of concepts or implementations. Moreover, even if one language implements a truly unique feature, it gets copied the next day (to where it makes sense), leading to even more similarity between languages.

The other part of what I'm saying is that if you research and learn as many unpopular languages as you can today - right now - you'll be covered (in terms of having to learn new features in your job's PL) for at least the next decade without any additional effort.


> That holds for every mainstream X

I thought that was exactly the point klibertp was making. The paragraph starts like this:

> Because there are very, very few features unique to each language. Unless you go to the fringes, you're bound to see the same concepts applied again and again and again


I think it comes from the Clean language, or maybe Cyclone or other safe-C dialects. Not 100% sure though - I gave up on Rust some time back due to the churn and still haven't gotten back to it :( - and anyway, at this point the borrow checker is more than 10 years old, so hardly a new feature :D


Structural typing wasn't around in mainstream languages before Go and TypeScript.


I assume you were taking klibertp up on this:

> try naming any "new" feature which was added to your favorite (EDIT: I meant mainstream - TIOBE top 10 or 20 - lang here!) language recently - and I'll show you that same feature implemented 10 or 20 years ago in another language(s).

This is a fun game and I'll answer for them on this one: OCaml (1996) has structural types.
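
(In Python terms, typing.Protocol from PEP 544 is the same idea - a minimal sketch:)

    from typing import Protocol

    # Structural typing: anything with the right shape matches HasArea,
    # no explicit inheritance needed - same idea as Go interfaces.
    class HasArea(Protocol):
        def area(self) -> float: ...

    class Square:
        def __init__(self, side: float) -> None:
            self.side = side

        def area(self) -> float:
            return self.side ** 2

    def describe(shape: HasArea) -> str:
        return f"area = {shape.area()}"

    print(describe(Square(3.0)))  # Square matches HasArea structurally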


> Lately I've confirmed it's because they know exactly 1 language, but they know it so well & in so much depth, they know exactly where to look - the right MCMC library, the right slice sampler, the right optimizer - that's what matters.

That sounds like your "issue" isn't knowing too many languages, but that you don't know any one language in as much depth as they do?

I don't see why you'd want to unlearn anything... doesn't that knowledge only help?

I think there are benefits to learning more languages but also support the idea that you should have at least one you're super comfortable with... that's not mutually exclusive!


Not sure how far you may be into your career, but as one who’s had a couple decades, it’s definitely possible to have both. I have been able to learn two very different languages (C and Haskell) to that level of comfort and am feeling nearly there in Rust. At the same time, I feel reasonably comfortable picking up a project in any of about 20 others with frequent reference to standard lib docs.

With only my own experience and that of people I’ve worked with to go by I can’t provide any broader scope, but I feel like it really is the case that a critical mass of familiarity with different languages and ecosystems makes it far easier to pick up and run with others and be more or less unaffected by the differences. It is probably important that the languages in your set be actually different though, rather than superficially different as most historically-popular languages have tended to be.


I agree. If most of your experience is with a popular OO language, consider learning a lisp. Learn an ML. Try elixir/erlang to get a feel for how different programming for the BEAM can be.

I suppose you don't _have_ to use different languages to learn new concepts, but it certainly helps in some cases. For example, learning about currying is going to be much more natural in F# than it would be in C#.
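
(For a taste of what currying means when the language doesn't do it natively, a rough functools.partial sketch in Python:)

    from functools import partial

    # Simulating currying: each partial application fixes one argument
    # and returns a new callable. In F#, `add 5 10` does this natively.
    def add(a, b, c):
        return a + b + c

    add5 = partial(add, 5)
    add5_10 = partial(add5, 10)

    print(add5_10(1))  # 16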


This reminds me of something Rich Hickey said about becoming a better developer:

https://gist.github.com/prakhar1989/1b0a2c9849b2e1e912fb

“A wide variety of experiences might lead to well-roundedness, but not to greatness, nor even goodness.”


Yea, this is 100% nonsense, and is borderline gatekeeping. Well-roundedness allows me to identify the particular language/tech stack that is most suitable for a task, and then I can just drill down on it and get productive very quickly.


Agreed. And to address the grandparent comment: it's entirely possible that he is approaching his coding with a professional mindset and wasting time on things like maintainability, scalability, or similar just out of habit.


It’s a waste of time to use classes if all you’re doing is hacking together 500 lines into a stable-enough heap to generate the graphs for a paper.

Not all academic code is that bad, but the heft is definitely towards that end of the pool.

I got really familiar with the distinction, because I spent a while translating PhD project/demo code into actual products for $JOB. It’s definitely mostly in making things secure, stable, maintainable, and debuggable that you lose most of the time.


This a thousand times over. Academic code will take every shortcut and simplification possible. Hardcoded values, globals, no modularity, no error checking, shared-everything architecture assumptions, single-letter variables with mixed naming convention, anything to get the job done.

Once the paper is written, the data analyzed, the project is done. There is no such thing as "maintainability" because the code isn't used past publication.


In mathematical code, single letter variables are often the clearest. "Descriptive" names only obfuscate the meaning further, because the meaning is in the math.


I've translated several mathematical papers into code, and I must strongly disagree. The very first thing I do is translate glyphs into names relevant to the domain I'm applying the math to. It makes the rest of the process immensely easier.


I guess we'll have to agree to disagree then. I too have coded up a lot of algorithms from academic papers. In my mind,

yk = C * xk + D * uk

is a lot clearer than

position_at_time_k = output_matrix * state_at_time_k + feedthrough_matrix * input_at_time_k.

The first is an idiom. The second is not.


That may make sense internal to a library, where you’ve established idioms.

But as a counterpoint, I only know what you meant by the first expression because I read the second expression.

I actually agree with you in large part, that short variable names can have more meaning within an established set of idioms because they allow you to parse whole statements at once. But there’s a trade-off involved, because mathematics can take symbology further than is useful.

For example:

    E[i=0;5](i**2)

    sum([i**2 for i in range(0,5)])

So it often comes down to a matter of taste.


Why not something like this?

    position = C * state + D * input

Reasoning: k is the only subscript, so it can be dropped. The meanings of C and D are implicitly defined through their function wrt state and input, so it's OK not to name them. It also keeps the structure visibly similar to that of the math.
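
(As a quick numpy sketch of that version - shapes and numbers made up for illustration, and `u` standing in for `input`, which shadows a Python builtin:)

    import numpy as np

    # Discrete state-space output equation: position = C @ state + D @ u.
    C = np.array([[1.0, 0.0]])        # output matrix
    D = np.array([[0.0]])             # feedthrough matrix
    state = np.array([[3.0], [0.5]])  # two states
    u = np.array([[2.0]])             # one input

    position = C @ state + D @ u
    print(position)  # [[3.]]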


At this point I think it's the ecosystem that you pick and not exactly the language feature.


> Cook something else, not just rotate dish-ware.

As a developer you are the chef. If all you can make is spaghetti, is it the kitchen's fault? OP makes the point explicit: the language shapes the way you approach problem solving.

When declarative, functional and procedural programming all yield the "same fucking spaghetti", you might just have missed the point.


It just sounds like they know the domain space better than you. You seem to know less about the domain of a Poisson clock than they do.

Maybe because I'm also unfamiliar with it, I can't judge. But knowing the right set of algorithms to solve a domain-specific problem sounds language-agnostic to me.

That said, I agree that you should also continue to learn about CS, learn more data-structures, more algorithms, explore newer techniques and paradigms. That includes learning new languages, but not only that.


> Lately I've confirmed it's because they know exactly 1 language, but they know it so well & in so much depth, they know exactly where to look - the right MCMC library, the right slice sampler, the right optimizer - that's what matters.

It sounds to me like you're working in a highly specialized area that requires deep domain knowledge and a rich set of tools to apply that knowledge. Your colleagues have a lot of familiarity with that domain and those tools. Knowing multiple languages isn't your problem; it's just that you haven't spent your entire career working in this particular domain and you're playing catch-up.


I work for an airline. I know about constraints. If I knew only one language, I never would be at the level I am. Here is the tech stack I deal with daily.

VB6, KiX scripts, PowerShell, .NET 3.5, WinForms, WPF, .NET 4+, .NET Core, ASP.NET, Java, JavaScript, React, Angular, C++, Golang.

If programming is a job and not your craft, it will be harder for you. You just have to practice more.


> VB6, KiX scripts, PowerShell, .NET 3.5, WinForms, WPF, .NET 4+, .NET Core, ASP.NET, Java, JavaScript, React, Angular, C++, Golang

That just sounds like a mess that one is forced to deal with, not reasonable software engineering. I mean, yes, some jobs will drown one in useless stuff. That doesn't mean learning the useless stuff is a virtue now.


What, exactly, makes you think any of that stack is useless or unreasonable? Do you know what all the pieces do? I don’t have a clue from a single comment, but I have enough experience to know that all stacks came from a series of reasonable decisions, and more importantly, specific problems to solve.

So, TBH, my gut reaction to your comment is that you might lack experience developing & shipping any large applications. That’s not an insult, and not a judgement; experience comes with time. Plus, I don’t know your experience; I’m just letting you know what your comment leads me to assume. My only suggestion is to be a bit careful throwing around judgmental words like ‘mess’ and ‘useless’ and ‘not reasonable’ when you don’t know precisely what you’re talking about.

FWIW, this stack looks very normal even for a web-only application, and it looks simpler to me than the stacks & tech people use for shipping console games. Hell, I’ve written personal projects with tech stacks that have as many pieces.

If I’m off the mark about your experience level, you could make your case stronger by demonstrating that there’s a simpler, cleaner alternative that solves the same problems and provides the same or better performance, build utility, maintainability, deployment, user experience, etc. Do you have any suggestions?


Frankly, that particular enterprise is probably screwed and will be stuck with their legacy code until the end of time.

This is what happens when one gobbles up whatever Microsoft throws over the fence... and it's clear they haven't learned their lesson because .NET core is on the list. There is no simpler, cleaner, alternative for them. By their nature such companies will (almost?) always end up in this situation.

Any developer working for them must now learn VB6, ASP.NET and Powershell and other great future-proof tech. That is, once again, not a great career move and certainly not an argument for learning many languages.

For the sake of discussion, that entire stack could probably have been kept to C++, Java, HTML5 and something for scripting assuming a talented engineering team.

C++ would have covered their Windows desktop needs from Windows 9x until today, including going cross-platform if needed.

Java would have covered cross-platform desktop apps and the back-end, likely including whatever they're using Golang for. Had they wanted more agility on the BE, they could have gone with Python, which doubles as a scripting language.

HTML5 is self-explanatory. No need for fancy-schmancy React or Angular, which I bet they'll have to replace (or rather append to) in 5 years' time.


It looks perfectly reasonable to me. VB6, KiX, PowerShell and .NET suggest some part of the overall system is done through Windows desktop apps and Windows computer automation. JS, React and Angular imply that at least two frontends are involved (or possibly one under rewrite). Java, C++ and Golang are probably used for other parts.

It's only small and well-defined software that can get away with being written in one technology. For anything larger/more serious, you're bound to end up in a polyglot environment.


KiX is funny; I had to look that one up, and it wasn't easy to find: it's apparently some closed-source "careware" batch scripting language developed by an MS employee.

Yes, most environments use several programming languages. Done right, this can be Google's blessed languages approach. Done wrong, it can be every technology Microsoft ever brought into existence, ending up with a triplicate solution for each problem.


I had to use a jar in a .net program. Do you know how much of a pain in the ass that was?


That is a very nasty joke, sir.


Sometimes our daily job tasks are nastier jokes than we would like to admit.


Is IKVM not a thing anymore these days?


Not really, and last time I checked there were no plans to ever migrate it to core.


Woa, bummer! That stuff was amazing. It opened up the entire Java ecosystem to .net apps, in an impressively seamless way.


Here's the project's end announcement.

http://weblog.ikvm.net/


I'm not familiar with the author, but I didn't take the article strictly from the perspective of the labor market. There are people who program as a hobby, for example. For those who are interested, it can be enlightening to see how different language designs lend themselves to different tasks.


For reference: the author wrote 'Writing an Interpreter in Go'.


And a follow-up book about a compiler.

https://interpreterbook.com/

https://compilerbook.com/


I didn’t make that connection. That’s a very good book.


And it can be even more interesting to learn about operating systems, networking, algorithms, etc, etc.


" its because they know exactly 1 language ,"

Which language is this? I'm guessing Python.


Asked myself that question too, but I think the answer may be: "it doesn't matter, because that's not the point".


Probably Matlab


It’s funny you say spaghetti and list two tools that were invented to eat noodles (chopsticks and forks). Even after its invention, it took centuries to improve the fork with a third and fourth tine. It seems hasty, in an industry so young, to say pick one tool and get great at it. We wouldn’t be where we are as an industry if we hadn’t accelerated web development with OO, dynamic languages, various innovative frameworks, etc. A Poisson transform doesn’t change, but the HTTP spec and its requirements do (try writing the asynchronous code that HTTP/2 requires in the languages and frameworks of a decade ago).


You did it wrong. What you are supposed to do is learn a bunch of languages and master one or two.


That's probably because those languages are completely middle-of-the-road, with a load of inconsistent APIs that one has to learn along with the syntax itself, and that don't really expand one's programming ability.


That's the difference between being skilled in many areas and being focused on one single goal and damn good at it.


Is it the language, or the language's library?


Why don't you share the code that they wrote? I'm interested in seeing the code produced by people in academia. Anyway, once you reach enough mastery in your craft, using one language or another is just a difference in deciding what level of abstraction you can work at, and this is determined by performance requirements.


"The code produced by people in academia" is about as specific a target as asking about the diet of people in Portland. It ranges from a strict regimen of Taco Bell to pesca-pescatarian.


I mean sure, if you have limitless time then feel free to learn a bunch of programming languages as well as everything else you could want to learn. But real humans have opportunity cost. If you are a software engineer, the time you spend on skill development can already cut across many, many dimensions. Additional programming languages is just one, and I'd argue a fairly narrow one after you've touched on a few key language paradigms.

What about:

- Architecture

- Software Delivery

- Networking

- Project management

- Interaction design

- Visual design

- Human factors/Social systems

- Graphics/Art (2D/3D)

- Market validation

- Sales & Marketing

- Business planning and finance

- Application domain knowledge

- Operations and Monitoring

- Data infrastructure

- Analytics

- Machine Learning

- Machine Vision

- Computer Graphics

- Simulation

- Game Engineering + Design

- Information Retrieval + Recommender Systems

- Embedded systems/Control theory

- Optimization

- Scientific Computing

I mean there is basically an endless list of areas you can reach into in the limited time you have for skill development as a software engineer beyond "on the job" training. Building a strong, broad "stack" of skills seems like a good investment. Learning new programming languages is a niche within a niche, depending on how expansive a scope you set for yourself as a person creating software for the world to use.


I would argue that for software engineers, the fields you mentioned would be adjacent fields, while other languages would give deeper insight into the tools of the very trade you're in.

In a time of people going for T-shaped careers, another language is deepening the vertical bar, while another field enriches the horizontal bar.

In that sense, learning a new language and learning a new field are complementary, but different in essence.


Learning a new language doesn't necessarily deepen the vertical bar if the language cannot really be used to improve productivity/innovation on top of an engineer's current toolset. Learning TypeScript on top of Javascript can be thought of as vertical, but learning say Lua or C# on top of JS is probably better described as horizontal unless you're already intending to do some really specific desktop application.


Learning different concepts from other languages can make a big difference. For instance, I learned about the value of composition over inheritance by learning Rust, then applied it to my life as an Objective-C developer (prior to Swift). It forces you to break out of your well-trodden paths and take the best of other systems and fold it into yours.
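
(A minimal, hypothetical Python sketch of the idea - the logger is a component the service holds, not a base class it extends:)

    # Composition over inheritance: Service *has* a logger rather than
    # *being* a logger subclass, so the parts can vary independently.
    class ConsoleLogger:
        def log(self, msg):
            print(f"[log] {msg}")

    class Service:
        def __init__(self, logger):
            self.logger = logger  # has-a, not is-a

        def run(self):
            self.logger.log("running")

    Service(ConsoleLogger()).run()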


Coming from a procedural, dynamically typed language like PHP and then learning Rust, Clojure, and Node.js - all have huge benefits.

Rust's ownership and type system teaches you about the freedom it affords you when reasoning about values in a system.

Clojure teaches you about separating state from logic and the benefit of keeping it at the edges of a system.

Node.js teaches you about async programming, which is, IMO, as different as functional is to OO. The way you need to reason about things is very different. The non-blocking requirements teach you about what types of things are blocking and which are not.

I took all these lessons back to PHP and my systems are massively improved as a result.

Most of the PHP hate comes from people dealing with PHP code written by people who simply don't know how to program.

That is not a defense of PHP, it has many faults, but it's a language like any other. I have problems, big ones, with every language and ecosystem I've ever been exposed to. That doesn't detract from their benefits or the concepts they can teach you.

Also, it takes like 20-30 hours to get mediocre with a new standard library and syntax. You won't "learn the language" but you'll get a good feel for it.

Arguing time cost as a reason to avoid learning new languages is pretty weak when you're spending a career programming.


I don't think there's a binary state on whether an area of study is adjacent or not, but a spectrum that is unique to the individual. Depending on your career goals, learning areas that don't directly contribute to the production or quality of the lines of code you write could be more "core" to your skills and non-adjacent compared to learning an esoteric programming language you may never even use to build something with.

Given the leverage developing software can have, and the ease to which it can be deployed globally, I err on the side of assuming that the breadth of skills that are "core" to a career which includes a focus on writing code to create software to be potentially quite broad.

I think I certainly take a generally unorthodox viewpoint in that I have a hard time swallowing the idea that a single person shouldn't be able to consider the skills to design, build, deploy, operate, and iterate on a single piece of domain-specific software as "core" to the job of being a software developer. 20 years ago, it was generally normal thing to do that, with a huge number of software tools (often shareware) developed by a single individual or 2-3 person teams. Today, its much rarer, perhaps except in a few domains like indie game development or open source infrastructure products/frameworks. Projects may start that way but it's generally assumed that when it becomes time to get serious, you need to staff up and delegate to specialized workers.

Even within the generally accepted scope of the domain of software development I find it hard to understand the justification for the separation between "front end" and "back end" engineering -- beyond the fact that the tools today have grown full of incidental complexity, making it hard to get the breadth of knowledge needed to be effective, it seems clear that a person building a single integrated system is going to build something different than a number of people building a system where Conway's law informs the architecture due to specialization and communication boundaries.


This is why we need practical immortality. There's just too much fun to be had.


Learning Rust might teach a JavaScript programmer a lot about performance that they can apply to their JavaScript. Learning F# or Rust might get a C programmer comfortable with ADTs, which one can leverage in C with a little work. I do agree, though, that taking this too far can just be a waste of time. Maybe pick up ONE new language outside of whatever domain you are currently in and see if you learn anything useful.


At this point I have almost 20 languages that I've used for at least two years each, which is what I consider a decent bar for "knowing" a language.

The problem I now have is that I don't know if I'll ever be able to master any language anymore. Mmmmaybe C, since I've used that fairly consistently, albeit on-and-off for 25 years.

But whenever I get to an "if" or "for" or function declaration, I often have to look at an example real quick because I have too many fighting memories: is it "if () then {}" or "if then:" and is it "else if" or "elif"? Do I need parens around the clauses? Is it "&&" or "and"?

Mostly I've found that the difference between two languages isn't the difference between two cheeseburgers, but rather the difference between a cheeseburger and lasagna. There's absolutely personal preferences (I still despise Python's whitespace-as-scope, even though I love the language), but they all get your belly full.


> I don't know if I'll ever be able to master any language anymore

Even with just a handful of languages in my toolbox, I feel this way too. A mixture of unease and anxiety.

In addition to syntactic variations, I find that the notion of writing idiomatic code in a given language amounts to more thinking overhead which eats into productivity. I'd need to be programming in the same language over a long period of time for idiomatic code to come more naturally. Can't seem to just instantly switch like some talented folk out there.


I run into the idiomatic, or even "regional dialect", issue all the time. I've had many coworkers look at me like I'm stupid when I ask them how they like to implement a certain basic algorithm. It's not that I don't know how to get it done; I'm just wondering how they like to structure their code, as I find it more important to match style than to impose my own, generally speaking.

If I don't have to match styles, and can just write the code as I feel like it, I can work so much faster.


I don't care so much about writing idiomatic code anymore. For example I tend to write python like javascript. Mostly only use lists and dicts (JSON basically), while ditching the whole OOP/class concepts for the most part. What I prefer is more or less a language-independent style, which makes it easy to port code at least between languages with similar paradigms.


> For example I tend to write python like javascript. Mostly only use lists and dicts (JSON basically), while ditching the whole OOP/class concepts for the most part.

that is exactly how you are supposed to write idiomatic python


Surely that depends on what you are writing.


Try clustering similar languages into groups centered around shared features, then try to come up with high-level statements which hold true for all the languages in a group. Then try to derive the individual differences within the group from these statements. Use mnemonic techniques as required and prepare cheatsheets if needed.

One thing to realize, though, is that syntax - outside of a few special cases - is the most trivial part of any language. It's 100% acceptable to forget the syntax of a `for` loop in one language, as long as you still know that, in that language, the `for` loop is actually a for-each construct working on sequences of some types, and additionally it's an expression which returns a sequence of results, making it equivalent to the `map` higher-order function. Now, I described the `for` loop of (for example) Elixir, CoffeeScript, Racket, Common Lisp (`loop` with `collect`) and F# and Scala (with `yield`). As long as I know that a language I'm using right now belongs to this group, I can plan my implementation around the semantics outlined above. Then, when it comes to writing the code, I can just look up the syntax, or more commonly - just make my editor autocomplete and snippet-insert the relevant bits for me.
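
(Python's `for` statement isn't an expression, but its comprehension plays exactly that role - a for-each that returns a sequence of results, i.e. a `map`:)

    # A comprehension is a for-each *expression* returning a sequence -
    # semantically the same as map with a per-element function.
    squares = [x * x for x in range(5)]
    same = list(map(lambda x: x * x, range(5)))
    assert squares == same  # [0, 1, 4, 9, 16]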

So, my advice would be to first learn and understand as many programming language features as possible, focus on their semantics, and then group the languages you know around the features. The syntax is really a trivial matter, and "mastering it" (ie. having the whole grammar constantly in your head) is not actually necessary in my experience.


I agree, I don't really sweat it, and the seasoned engineers I respect totally get it. But mastery can be fun, and while I'm developing a kind of "meta-mastery" (I touched my first Go production codebase two weeks ago, and was able to fix a bug in it without really cracking the books except for Interface{}), I miss plain-old mastery that other fields can achieve.


I'm this way with foreign languages: Spanish and French. I know French very well, and learning/speaking Spanish is significantly harder because I'm always using French words by mistake. Sure, it'll get better with practice, but interlingual dyslexia is real.


Interesting, I feel like I have the opposite issue. Coming from Portuguese, learning Spanish was a breeze and I can usually surprise French speaking people with “hard” words in my basic French sentences — just because I reached out to a Portuguese word and “frenchyfied” it.


Going from native Portuguese, then French, and only then Spanish (yep, I've gone to a crazy school), I got much of the problem the GP was talking about. I don't think I've ever spoken so much French as when I was learning Spanish.

There's something with those two languages in that they interfere badly.


Spanish and Portuguese seem to be unusually close. Studying one helps me with the other.

Portuguese and Japanese on the other hand...


The OP wasn’t talking about Japanese. They were talking about European languages - many of which share a common heritage.


Not all of them do, but the ones mentioned were all very similar Romance languages.


I’m aware not all of them do. This is why I said “many” and not “all”.

There are roughly 3 main groups of European languages: Italic (or Romance, as you described it) for Western Europe; Germanic, which is predominantly Central Europe, the Scandinavian countries and the UK; and Balto-Slavic for Eastern Europe. Generally speaking, of course.

However, there is still a fair amount of cross-pollination even between the Germanic and Italic languages, not to mention shared characteristics (not least of all a shared alphabet) that don't exist with Japonic languages such as, well, Japanese.


I think this dyslexia happens more with non-native languages.

Like when I was trying to speak German after just some time in France. My native language is Portuguese.


German and Portuguese differ significantly more than French and Portuguese, so that might be affecting your experience…


I’m not too familiar with the Southern European languages, but I am quite familiar with the Northern European languages. That is where I find the opposite of your difficulty happen, where all those languages look like a different flavour of German to me, so they were very easy to pick up, because I could just read Swedish slowly without ever looking at a textbook or dictionary, and over time I found myself quickly able to write it.

Differences between people, or differences between cultures perhaps. Having grown up in the Netherlands I did already come across Frisian and Limburgs, which are also slightly different but if you keep reading or listening you just pick it up. So I don’t know, keep practicing I’m sure you’ll get Spanish!


It's true that learning one Latin language (like French) makes it easier to understand another Latin language (like Spanish), and the same is true of learning similar programming languages. But the point isn't that it's easy to approximate what is being said; it's that when you are speaking, all your prior knowledge confuses you.

For example, if you start speaking (not listening to) Frisian and Limburgs, you might find yourself throwing a few Swedish or German words in the mix.


I have walked up to a shopkeeper and asked [French(Can I have a loaf of)][Chinese(bread)].

I'm white, so I'm sure this was even more confusing for him.


When I started learning Mandarin (my fourth language) I became totally unable to speak French (my third language), as the Mandarin words would pop up first in my mind. I think by now they've been sufficiently separated in my mind, but I haven't had any use for my French, so it has atrophied anyway. I'm now learning Cantonese, which is similar to Mandarin, and I'm only occasionally mixing it up with Mandarin. It seems to me the distance between languages isn't so important; you can mix up any two languages or not. It probably depends more on how much your brain has to rewire itself to accommodate the new language?


Perhaps try learning sufficiently different languages? For me it's hard to make cross-language mistakes except within the {} groups:

C, {all kinds of assembly}, {C++, Java, C#}, {Python, Ruby}, {Haskell, OCaml}, Prolog, {VHDL, Verilog}


One really striking thing is going between {Go, C++, C#, Java} and getting the sources of documentation confused between each one. It's not even the languages themselves - the ecosystems can be really confusing.


{CL, Scheme, Clojure}, {all kinds of shell scripting, maybe Perl}


I know all those except Prolog, plus a couple of Lisps - it just makes it worse ;) I agree that the more disparate the languages, the easier it is. And if I use one language consistently for a year or so, I get pretty keyed into it and only rarely catch myself puzzling over why something isn't working because I randomly typed "if not X {}" in Go ;)


I thought I was the only one. Also class and method names: is it array.length(), array.len(), array.Length, array.size(), len(array), or Len(array)?
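(For what it's worth, here's how one of those languages settles it; a quick, purely illustrative Python check, with the habits from other languages left as comments because they fail:)

```python
# Python's answer: the len() built-in, which delegates to __len__.
xs = [1, 2, 3]
print(len(xs))        # 3
print(xs.__len__())   # 3 - what len() calls under the hood

# Muscle memory from other languages fails loudly here:
# xs.length   -> AttributeError
# xs.size()   -> AttributeError
```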


Thank goodness for Intellisense style autocomplete, and hover documentation.


This is one reason I love PowerShell's design that everything is case insensitive.

For casual scripting, it's so nice not to have to care whether it's length or Length. At least memory can help me with len vs. length, but length vs. Length is no distinction at all.

(Case sensitivity is one of my pet hates, for its anti-human UX.)


Interesting. I've come to basically the opposite conclusion and find case insensitivity to be counter intuitive. "Foo" and "foo" are not the same set of ascii characters; why should they map to the same thing?


"Dog" and "dog" are the same word, they are read the same, and pronounced the same. "D" and "d" are the same letter of the English alphabet, an alphabet with 26 characters not 52 characters. Why should they map to different things just because of some implementation limit behind the scenes in a computer?

Why should the ASCII table be the defining characteristic, over and above the way humans have used English for decades? ASCII "Foo" and UTF-16 "Foo" are not the same set of bytes, should they map to different things? "Foo" and "Foo" are not displayed with the same set of pixels, should they map to different things? "Foo" and "Foo" are not stored in the same memory address, nor were they typed in the same number of milliseconds over the same USB packets, why should the ASCII table internal detail matter and those details not matter?


As a fun counterpoint about case sensitivity in English, consider:

- The man spoke to God, saying he had made poor decisions.

- The man spoke to God, saying He had made poor decisions.


IDEs are useful for this. Let them keep track of minutiae like syntax rules and the names of common but differently-named methods. I also find that it only takes me about a day of picking a language back up to remember which way it's gone on most of these things.

This kind of gets at what I think is the big advantage of learning a bunch of languages: it gives you instincts for which things are minutiae and which aren't. The things you listed that every language has but does differently, which are annoying to figure out and remember - that's the minutiae.


I only know 5 or 6 but have the same problem: dropping semicolons into Python, forgetting brackets in JavaScript.


Too similar to keep them apart and too different to treat them the same. I don't see a solution apart from regular repetition, do you?


At the end of the day it's more about applying the right patterns. A language is just a hammer, but you won't make good furniture without design skills. So it doesn't matter that you need to google things and forget things.


> Mmmmaybe C

Lol - I remember years ago bouncing through JavaScript, Python, Lisp, Prolog and others, all using ";" differently; then coming back around to C and, despite once knowing the spec by heart and having written C parsers, writing some one-line test programs... because I just could not believe that the semicolon was a statement terminator - it looked so "this just isn't right". :)

> [syntax]

But when swapping languages, I had more trouble with cognitive interference on higher-level constructs. Syntax can go in cheat sheets[1], and idioms can be gathered in example files. Badly organized and incomplete documentation (once common, before the programming community exploded in size) can be overlaid with tables of contents and notes - because remembering how to find things in each language's documentation was a major pain for me. But for designing, say, large APIs, you have to remember things like: some type-system path does look pretty... but only until you hit some language-misfeature monster that lives on it. And you also can't shy away from an approach because of a misremembered or misattributed gotcha from another language. Though maybe that's easier now, with so much discussion of best practices and so much code available to read.

> cheeseburger and lasagna [...] they all get your belly full.

Or alternately, that they're all shambling toxic wretchedness, but you choose the one which seems likely to poison the customer the least, cooking it as well as circumstances permit. Cockroach popcorn and fried millipedes can be tasty. And even with swill milk... gypsum plaster is non-toxic... it's the other adulterants, little nutrition, and absence of sanitation that burn you. I do love programming, but I so look forward to less crippling languages.

[1] http://rigaux.org/language-study/syntax-across-languages.htm...


The best technique I've found for staying sane when working in a large number of languages is to configure your editor to highlight errors per language. If you use parens where you're not supposed to, your editor should tell you right away. No need to go find an example.

This becomes less useful when it's "which library do I use" or "what is the idiomatic way to do this in X language".


I know a bunch of languages too, or at least I claim to know them, because I've used them for some time in the past. And I do not remember most of them, in the sense that I cannot right now start writing in those languages. But I can start writing C or Rust without needing to look at a tutorial or the like, because I use them routinely.

After Lisp I lost the idea of syntax as an inherent part of a language. The fact that Lisp's s-expressions and their in-memory representation map onto each other seamlessly leads to the conclusion that neither is essential: there is an abstract idea of a Lisp object, while concrete representations of Lisp objects are just practical ways to deal with them in different situations.

So the official language syntax is just one practical way to represent ideas in that language. Most languages don't bother to have a second representation, but it doesn't matter. Syntax doesn't matter. You can learn it on a whim in half an hour of leisure reading.
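(Python can illustrate the same idea, for what it's worth; assuming Python 3.9+, where ast.unparse exists, the textual syntax is just one serialization of the underlying tree:)

```python
# The same program in two representations: text and tree.
import ast

source = "total = price * quantity"
tree = ast.parse(source)

print(ast.dump(tree, indent=2))  # the tree-shaped view
print(ast.unparse(tree))         # back to text: total = price * quantity
```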


I wrote a working compiler (a learning-exercise-level one) effectively in pseudo-code resembling Java once, because I didn't know the syntax very well; then I commented it all out and translated it to Java line by line. Surprisingly, it had no bugs that I ever knew of. It definitely freed me up to think purely about the logic.

You might just throw down whatever and let the next compile/interpretation cycle tell you if you didn't get the syntax right.


Yes, especially since some languages are deceptively similar: if you know C, then Perl, JavaScript, and PHP look very similar, but it is hard to remember the differences.


In case anyone isn't familiar with them, these two books[1][2] from The Pragmatic Bookshelf are great for doing just this. They offer a guided intro to 7 different languages each, and they're a whole lot of fun.

[1] https://pragprog.com/book/btlang/seven-languages-in-seven-we...

[2] https://pragprog.com/book/7lang/seven-more-languages-in-seve...


I loved the first one. It's one of my all-time favorite technical books. I was really disappointed by the second, though; I'd recommend skipping it. The second lacked the cohesion of the first, probably owing to the fact that it wasn't written by the same author as the first: each chapter was written by a different author.


I learned programming from books like The Waite Group's Turbo-C Bible around 1989. Trying to teach my oldest son programming through modern books seemed much more difficult: IDEs and languages change so rapidly that books (and even online tutorials) become dated quickly. Menu options change, links stop working, etc.

It made my son more frustrated when he was first starting.


The first book led me to discover Erlang, which literally changed my life. Great resource.


Could you please elaborate on how it changed your life? I am interested in learning Elixir and wondering if you can give some words of encouragement, assuming it changed your life positively :)


Falling in love with Erlang led me to create a Twitter account to share information about it, which helped me find and land a job with Basho, which was by far my favorite job, even if the company ended badly.

So, that part is difficult to replicate.

Setting that aside, Erlang finally helped me understand what functional programming is about (I'd tried and failed to grasp Lisp on a few occasions), taught me the value of immutability and asynchronous message passing, and really opened my eyes to the fact that there's a vast world outside the tired Algol family tree.

Sadly, pattern matching and immutability have made it very hard for me to enjoy programming in other languages. Most of my development work after that has been in Python, which is not only the least exciting language I've used in a very long time, but also lacks most of what I came to appreciate about Erlang.

Erlang's constraints (primarily immutability, in this context) make it so much easier to reason about and troubleshoot code.

It's also a good language for helping get opportunities to talk at conferences. People keep hearing about it without knowing much about it, so those talks tend to be well-attended.

Elixir is a perfectly acceptable language, although the syntax and other design choices turn me off, personally. Erlang is a very concise language and helps me think in Erlang; anything that looks like Python/Ruby/C/Java/etc just feels wrong now.


nice books


I spent a lot of time learning Haskell early in my career. I doubt I'll ever write an actual program in Haskell. I probably won't write so much as a single line of it that will ever see production. (Thank goodness. Haskell is far from perfect, despite its many zealots.) But learning Haskell has changed the way I write code in any language. It's helped me to write better, more testable, more readable, more maintainable, more bug-free Javascript. I wrote a lot of Clojure for a while a few years ago and being familiar with Haskell dropped the learning curve of Clojure down to almost nothing. I personally adore Lisps more than Haskell, but Haskell unlocked the world of functional programming for me. It also showed me the true power strong typing has at a time when the only strong typing I was aware of was the dismal type system of Java. (Like late 90s Java at that!)


I also really like Haskell but except for one consulting customer I only use Haskell for side projects.

Recently I started a side project with the deep-learning part in Python and the rest in Haskell. Just a week ago I ended up converting the Haskell part to Common Lisp. Much faster dev, but then I have been using Common Lisp since 1982.


> learning Haskell has changed the way I write code in any language

I think that's the bottom line. One of Haskell's creators confirmed and explained as much: https://youtu.be/iSmkqocn0oQ?t=22


I couldn't agree more, but let me make a recommendation: if you already know a mainstream OO language, you won't get much benefit out of learning another in the same arena. Going from Java to C# you will learn less than going from JavaScript to OCaml or from Java to Haskell.

The best thing (for my own learning) I ever decided to do was learn Haskell. I have never been paid to write code in Haskell, but it gave me such confidence with languages that I don't think I've seen any feature in any other language that has surprised me or that I felt I couldn't learn. Haskell and its language extensions will expose you to many, many different ideas. It helped me learn Rust; I feel like I can read OCaml; any other functional language doesn't feel like a stretch to read; etc.

So, don't just learn many languages. My advice would be to pick languages across paradigms and learn them. Don't waste your time learning 5 object oriented languages.


Failing to learn how to program in Haskell made me realize that I had absolutely no understanding of what I was really doing when I was coding, and it sent me down a deep rabbit hole learning all the computer science concepts I hadn't ever learned as a self-taught programmer.


I had the same experience with lisp, and it's one of the reasons I chose to keep programming (and eventually major in CS) during college.

It's crazy how easy it is to learn how to pattern match well enough that you can build really useful stuff while still not really understanding what you're doing. Human brains are amazing.


What would you recommend first? I'm also self-taught: just JavaScript, TypeScript and React, and a tiny bit of shell scripting. The job never asks for anything more, but I want to get better at designing, engineering, and building scalable webapps. So more towards "full stack", I guess.

Haskell sounds super cool, but it seems like nobody really uses it to build things on the web. They mostly just talk about how it "helped them think".


Haskell actually could help a lot with JavaScript. Functional programming, writing stateless functions, function parameters, and generally manipulating functions are valuable and useful concepts in JavaScript and React, and Haskell forces you to learn and understand them. Alternatively, F# could fill that niche if you prefer its syntax.


ReasonML is a good transition from JS to something like OCaml


> Going from Java to C# you will learn less than going from javascript to ocaml or Java to Haskell.

Depends on how you use C#. If you write Java-style OOP, you'll indeed learn very little. But C# offers much more than Java: generics, FP, LINQ, async-await, dynamic, native interop, unsafe code and pointer arithmetic...


I disagree. Regardless of "how you use" C#, going from Java to Haskell will be a paradigm change, whereas going to C# is an incremental change.


Out of curiosity, if you were to point out the focal languages for each paradigm, what would they be?


Classic OOP: Smalltalk

Metaprogramming / what OOP could have been / interesting error handling: Common Lisp

Static Types/FP: Haskell (or Ocaml)

Logic Programming: Prolog

Untyped FP: Clojure (or Scheme)

Actor Model: Erlang (or Elixir)

CSP: Clojure w/ core.async (or Go)

GUI Development: Lazarus (or Delphi)


The sibling comment did a great job pointing these out. I agree with most of the points.


So I fully agree here because techniques from one language often lead to much better technique in a language of a different paradigm.

For instance, I now use many of the functional techniques I’ve learned in JavaScript to write less, easier-to-read code instead of more LOC of imperative code. I’ve also worked at a company that had many functional patterns implemented in PHP, which frankly made the language quite decent.

In terms of the time issue, I spend about 15 minutes a day just before bedtime learning new languages. I probably manage to do this 4 days out of every 7.


> In terms of the time issue, I spend about 15 minutes a day just before bedtime learning new languages. I probably manage to do this 4 days out of every 7.

Do you find this actually works? Personally I've read entire books for a language and then gone to write some code in it and realized I knew almost nothing. The only way I've found to learn a language is by writing it.


Yes, though I tend to then go and use those languages for small suitable tasks pretty often.


> I’ve also worked at a company that had many functional patterns implemented in php, and frankly made the language quite decent.

Oooh, got any examples?


Not OP but:

https://phptherightway.com/pages/Functional-Programming.html

https://github.com/mtdowling/transducers.php

Don't forget that putting a function into a function gives you behaviour polymorphism at runtime, without inheritance or class-based dependency injection.
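(A minimal sketch of that idea, in Python rather than PHP and with hypothetical names; the behaviour is passed in as a plain function:)

```python
# Behaviour polymorphism at runtime: the pricing rule is just a function.
def total(orders, pricing):            # pricing is injected behaviour
    return sum(pricing(o) for o in orders)

def regular(order):
    return order["qty"] * order["unit_price"]

def discounted(order):
    return regular(order) * 0.9        # same shape, different behaviour

orders = [{"qty": 2, "unit_price": 5.0}]
print(total(orders, regular))      # 10.0
print(total(orders, discounted))   # 9.0
```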


The company I worked at was Facebook when it still ran on pure php. Much of the functional toolkit was written by Evan Priestley, who also built and still works on Phabricator.


My suggestion is Objective-C. It's just so weird to anyone who's never coded in it that it will force you out of your comfort zone into new ways of thinking. The NS frameworks are probably the best-designed stdlib of any language ever, and the whole thing is inspired directly by Smalltalk. Protocol Oriented Programming [0] was also a real revelation for me, and I still apply those concepts anywhere I can. It's also a great introduction to lower-level concepts like pointers and memory management, and one that is a lot more forgiving than C/C++.

[0] https://www.sicpers.info/2015/06/protocol-oriented-programmi...


My thoughts on weird are APL, Smalltalk, Forth, Haskell, Prolog, and Lisp. They're all weird in a good way though.


Obj-C will always be a dear example to me, because after people praised it so much on HN in contrast with C++ and Java, I expected it to be at least a little bit special. When I actually had the chance to use it, I found a mediocre and pretty error-prone language. And then, the cherry on top: Apple unceremoniously flushed it down the drain...

This taught me to take any programming language recommendations from HN with a huge quantity of salt, and also to mostly ignore Kay-style OO purists.


I'm sad to see that you feel that way about the language, since I had exactly the opposite experience. Objective-C is a beautiful (if syntactically verbose) language, and I prefer it to C++ as an "object-oriented C". I'm actually curious what you tried doing with the language, because

> I found a mediocre and pretty error-prone language.

makes it seem like you didn't get a chance to dive down into the runtime and how the message-passing model works. FYI,

> Apple unceremoniously flushed it down the drain...

this is not true at all. Most of the code that Apple themselves writes is Objective-C.


Why would you suppose that C++ is an object-oriented C? The primary paradigm in C++ is generic programming, not OO.


Modern C++, kind of.

The Gang of Four book was written with Smalltalk and C++ examples.

Back in the 90's we had Mac OS PowerPlant, CSet++ on OS/2, OWL/VCL/MFC/ATL on Windows, Motif++ on UNIX, Taligent, and a myriad of ORM, distributed computing, and image libraries and whatnot written in OOP C++.

Then came Java, which took the best practices from OOP C++, and two decades later 90's C++ is known as Java OOP and people act as if C++ OOP never happened.


Maybe now it is, but before the STL etc. were thought of, C++ was intended by its creator to bring better support for OOP to C.

See http://www.stroustrup.com/bs_faq.html#why


Objective-C is also the Stack Overflow 2019 Developer Survey's most dreaded language.


This is mostly because Swift exists, and that is where app development, Objective-C's main niche, is moving. Note the verbiage describing "dreaded":

> Most dreaded means that a high percentage of developers who are currently using these technologies express no interest in continuing to do so.


I feel the biggest benefit I get from playing with alternatives is a higher rate of "hits": picking up on things previously invisible to me.

Playing with a different editor gives me ideas for different workflows and shortcuts which my main editor already supports. Playing with a different programming language shows me how powerful feature X is, which I had never touched in my main programming language.

The focus on learning and exploring an alternative is more powerful for me than mastering that alternative. I get the greatest benefit with a relatively smaller investment that way.

Maybe instead of learning more programming languages, just pick small bits of something to explore.

This might make for an interesting app (which won't make money.) Create a sort of "useless but interesting facts" type of app for programmers. Allow people to submit "cards" with something on it and then up/down vote on the card. The snippet could be a term, clever code, or anything bite size. The programming equivalent of "how to say ck in Klingon."


It is really fun to learn new languages and you certainly should know a language in each of the major paradigms. But beyond that I think learning languages might have diminishing returns compared to tackling other "axes" of difference, e.g. learning new platforms, frameworks or environments.

The conceptual differences between a desktop GUI app, a command line app or a web app are much bigger than between say functionally similar web apps implemented in three different programming languages. A business app based on a relational database is fundamentally different from say a game, even if the language is the same.


This is true, though if you've only ever built a web stack in Rails with JS, then trying to build a stack in Haskell will be quite educational, both on the back end (how you deal with side effects in Haskell, and how purity leads to different code architecture) and on the front end (stream-based, event-driven UI instead of stateful spaghetti).


I think there's a bit of a threshold with language learning. When you know a handful (or just the necessary ones for your job), there is probably a bit of a tendency to be religious about your "stack". When you know a few more than you need, you're driven to pick between them for any given thing. Then you start picking languages you don't know and learning them to do some thing you need done, and so on.

I wrote some data automations recently for a client. Here are the languages and DSLs that ended up in the mix: cron, Makefile, bash, sed, regex, jq query, python, R, sql, nginx conf.

I think, especially if you're a vertically scaling kind of gal with a fondness for conciseness, parsimony and efficiency, you'll just end up attracted to many-language solutions. I just think there is no better way to write less code or to keep the semantics of your language more relevant to the task at hand.
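(To give a flavour of what one such glue step can look like, a hedged sketch with hypothetical file names and fields: jq does the JSON slicing, Python loads the result into SQLite for the SQL stage:)

```python
# One pipeline step: jq extracts records, Python loads them into SQLite.
import json
import sqlite3
import subprocess

rows = subprocess.run(
    ["jq", "-c", ".records[]", "export.json"],   # one JSON object per line
    capture_output=True, text=True, check=True,
).stdout.splitlines()

con = sqlite3.connect("pipeline.db")
con.execute("CREATE TABLE IF NOT EXISTS records (id INTEGER, name TEXT)")
con.executemany(
    "INSERT INTO records VALUES (?, ?)",
    ((r["id"], r["name"]) for r in map(json.loads, rows)),
)
con.commit()
```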


"Here are the languages and DSLs that ended up in the mix: cron, Makefile, bash, sed, regex, jq query, python, R, sql, nginx conf"

And also anyone who could replace you will have to know all these languages. Nice approach to job security.


> cron, Makefile, bash, sed

For a programmer with a minimal amount of linux experience, it likely takes a week to catch up to what the codebase is doing with these.

> regex

There are good tools to destructure regex.

> python sql jq query

I consider those as "given" or "learnable good enough in a week"

> nginx conf

Don't know this.

Overall, "knowing all these languages" seems overblown in the choice of words for me. Java, Python, Prolog and Clojure? Hell yeah that would take quite a developer to replace! But the above? There is 1 proper general-purpose language in there, the rest are well-known tools with great examples online.


> There is 1 proper general-purpose language in there

That will depend on what the GP has done with SQL. Also, jQuery implies JavaScript. There are up to 3 general-purpose languages in there, one of which nearly any developer will know.

There should be plenty of people capable of taking the codebase, but it's not a trivial single language case. The upside is that good developers will be overrepresented when compared to the population that knows one of Python or Javascript.


Judging by the surrounding context, they were probably referring to jq[1], the delightful command line DSL for manipulating JSON, not to jQuery.

[1] https://stedolan.github.io/jq/


> That will depend on what the GP has done with SQL.

I am sure they built their UI with SQL, as opposed to, for example, simply doing some queries, or some stored procedures at best.

> Also, JQuery implies in Javascript.

With the utmost likelihood, they did not only do dynamic changes to some HTML documents there, but also their backend and cryptography.

Enough with the snark... ("Language" is such a fuzzy term anyway. You could go as far as to label any interaction with a computer as one; there will always be a protocol according to which the interaction happens.)

> There should be plenty of people capable of taking the codebase, but it's not a trivial single language case. The upside is that good developers will be overrepresented when compared to the population that knows one of Python or Javascript.

I still stand by my point. As far as the cost of polyglotism is concerned, that is a one-general-purpose-language codebase with the rest being DSLs (apparently R too). There are other codebases where there really are 2+ languages, i.e. some functional language on top, some performance-critical code hand-written in C, some logic programming language to do constraint programming. Each of those three takes quite a lot of experience.


You can actually build your UI with SQL, e.g. PL/SQL.


I had a gut feeling you could ;) Interesting.


Check the history of Oracle Forms and APEX products.


Also, sorry if I came off as rude; I did not mean to!


jq is a JSON-specific query language, not related to JavaScript.


There is a difference between “knowing a language” and knowing the frameworks, idioms, ecosystems etc.

I could learn Swift in a week but that doesn’t mean I would be a competent iOS developer.

Someone who knew Python couldn’t do anything useful with the type of automation that I do with Boto3 without knowing the intricacies of AWS.


2 general-purpose languages: R and Python.

Multi-language solutions make sense with languages that specialise. So I'd never write in "Go and Java", for example, or "Clojure and C#" or some other pair of general-purpose languages.

For me, Python usually ends up being the glue. But otherwise, I use a tonne of special-purpose stuff, e.g.:

AMPL

Prolog

R

Lua

SQL

sed/awk/grep/jq/etc

C++

Makefile DSL

cron DSL

Docker DSL

etc.


Now if you multiply these hypothetical two or three weeks by the hourly rate of someone who can learn, you'll get the cost of replacing the author with another employee.

My point is that the cost could've been much smaller.


In my eyes, you neglect the opportunity cost of "using one tool for everything": you end up with a solution that is much harder to maintain, since the wheel will have been re-invented many times over and sprinkled with bugs along the way. Given a problem of fixed complexity, at some point costs will rise when fewer tools are used.

In summary: there is a "too much" and a "too few" when it comes to the number of tools used in a project. Where on that spectrum the OP falls, we can't possibly know without more details. Maybe the cost could have been smaller; maybe they hit an optimum and costs couldn't have been reduced by fewer tools (or more tools).


True.

Still, bash and (most likely) sed could've been easily replaced with Python.


That’s good or bad depending on the world you want to see.

In my opinion, engineers should have an ample toolbox, for much the same reason that a good carpenter has a tool for any given thing.

If well-tooled engineers are what you’re after, then you’ll be replacing one with another, with few surprises.


From the client's perspective it must be bad because they cannot task their own employee who can program a little bit with making minor changes to your solution.


Not as uncommon as you are suggesting, I would think. These particular tools and languages are often learned as a group, because they support each other.

(R is perhaps the exception, but it does actually work really well here, because it's the best tool for certain kinds of table mashing and chart creation.)

Little command-line tools, SQLite, and short, focused programs, all tied together with a makefile, make for a really nice way to arrange a data-processing project.

You end up with lots of intermediate output tables to look at, each of which is produced with a small step. This makes for easy testing and debugging.


I think especially if you're a vertically scaling kind of gal, and you have a fondness for conciseness, parsimony and efficiency; you'll just end up attracted to many language solutions

There's a question I really want to ask HN related to this.

In the compression challenge (how well can you compress this data), the forbidden cheat is to put all of the data inside the "compressor/decompressor" and have the data be a single bit. The way to block that is to mandate that the total size is compressionTool+data size combined so you can't just move the data around.

So is there anything like that for comparing programming languages? If something is "easy" because it's implemented in a large runtime environment you have to ship as well, that's cheating. If it's easy because the language is powerful and well designed, that's not cheating.

If you like "conciseness, parsimony, efficiency", is the entire Python stdlib and the entire R library and a SQL engine and NGINX consise just because you hid all the implementation details in them, and they let you write your "compressed data as as a single bit"?


Well, you’re not maintaining the runtime environment... If you’d like the minimum bytes for everything, then consider writing your next app in assembler :)

The purpose of multi-language programming is articulating your solution in the least amount of code.


I agree. People try to analyze logs by coding in Java, for example. Just use an R package; you will probably write 2 or 3 lines of code in R.


Frankly I've never understood the desire to be a polyglot. The time spent learning new syntax and a new standard library could have been spent learning libraries, algorithms, data structures, etc, so I don't think it's the most productive way to improve yourself as a programmer. I'm more interested in learning a new language because it lets me do something I couldn't do before.


It should be qualified that you should learn languages that are actually conceptually different. You could easily learn 20 languages that are practically identical as far as the actual concepts are concerned. The difference, to you as a programmer, between C and Python, is much smaller than between Haskell and either of those, or between Haskell and Idris.

Covering a larger volume of concepts is what should be the emphasis. That is definitely useful, maybe even more useful than just memorizing stock algorithms and data structures, especially if you engage with the mathematics behind the languages.


Agreed. For example, I spent a couple years using JavaScript as my main language at work then changed jobs and learned OCaml. If I need to use JavaScript again, I'll do it in a much different (IMO better) way than before I understood functional programming.


I always had the feeling that the hard stuff about programming wasn't libraries or syntax, but how to structure code conceptually – especially if you want to keep it flexible and future proof.

Learning Rust, for example, gave my C and C++ a noticeable boost, just because Rust made some topics unavoidable that I had managed to subconsciously avoid for years in C and C++.


My impulse to learn new languages is primarily driven by my desire to be able to read code that other people are writing, not necessarily to use it myself. For example, I'm not a huge fan of C++ or x86 assembly, and I doubt I will ever write anything significant in either, but I learned C++ so I could read what other people were doing with it, and I learned x86 assembly so I could understand reverse engineering tools better.


At work we use C#, Java, C++, Python, Bash, PowerShell, JavaScript, Perl, etc...

If you only want to work on C#, you're limiting your mobility here.

Personally, I never get bored because there's so many interesting projects in different languages.


Are we working together? :)

Yep same here.


We could be, y'all hiring? :)


I know four useful languages well - C, C#, JavaScript, and Python. I know C++ as it existed shortly after the STL came out.

As far as knowing a modern framework and an ecosystem around the language to do something useful, I would take C off the list. I haven’t done anything useful with C since MFC/Win32.

If you want to do anything with the web, you have to know Javascript. C# is my favorite “serious” big project language but it’s way too heavyweight for simple scripting. For that Python is my go to language.

But, I’m not going to spend my limited free time learning any new technology that doesn’t directly help my career.


I only got my current job because I happened to be reading a book on C#, which is strange because we barely code in C#. I spent several years programming in Elixir which got blank stares from recruiters. But saying I know Elixir and C# actually got me a pretty good gig!

In an ideal world, knowing data structures and algorithms would get job interviews, but without being able to at least speak to some specifics of the language being used, a good number of HR departments will screen you out before you’ll get to the data structures and algorithms part of an interview.


Why would knowing “algorithms and data structures” be ideal? In the real world, most jobs aren’t about knowing either. They are about translating business requirements into code.


...and everyone knows code has nothing to do with algorithms and data structures.


Well, if you’re doing yet another software as a service CRUD app or another bespoke app that will never be seen outside of a company - like most developers - knowing:

Given a binary tree, return the level order traversal of its nodes' values. (ie, from left to right, level by level).

isn’t that useful, nor is knowing how to invert a binary tree.

I would much rather you show some competence in the language we are using. Knowing LeetCode isn’t going to help if we need an iOS app...
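(For reference, the traversal being dismissed here is a short breadth-first walk; a sketch, since whether recalling it on a whiteboard predicts job performance is the actual debate:)

```python
# Level-order traversal of a binary tree (the problem quoted above).
from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def level_order(root):
    levels, queue = [], deque([root] if root else [])
    while queue:
        level = []
        for _ in range(len(queue)):   # drain exactly one level per pass
            node = queue.popleft()
            level.append(node.val)
            queue.extend(c for c in (node.left, node.right) if c)
        levels.append(level)
    return levels

print(level_order(Node(1, Node(2), Node(3, Node(4)))))  # [[1], [2, 3], [4]]
```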


Trees - although not necessarily binary ones - are everywhere. If you don't know about them, your CRUD will explode if objects can form a multi-level hierarchy. Don't make too light of CRUD apps - there's complexity there, too.


Everything you're saying is true, but for some reason companies still think they need to test you on hackerrank/leetcode like problems. I would much prefer to do some code in the frameworks I claim to know.


Not at companies I interview for. The last time I had any type of algorithm interview was in 1999, when I was applying for a job as a low-level, cross-platform C bit twiddler. Since then, all of my interviews have been a combination of soft skills, tell-us-about-your-experience, and whiteboard architectural discussions. Of course they asked me technical questions about the language and stack they were using.


Depends very much on what that CRUD app is doing.

Knowing algorithms and data structures pretty well is the difference between an update button taking a couple of minutes and taking milliseconds.
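(A hedged illustration of the scale involved, in Python: the same membership tests against a list and against a set. Exact timings vary by machine, but the gap is orders of magnitude:)

```python
# The same lookups, two data structures: O(n) scans vs O(1) hashing.
import time

ids = list(range(1_000_000))
id_set = set(ids)
wanted = range(0, 1_000_000, 1_000)    # 1,000 lookups

t0 = time.perf_counter()
sum(1 for w in wanted if w in ids)     # scans the list every time
t1 = time.perf_counter()
sum(1 for w in wanted if w in id_set)  # hash lookup every time
t2 = time.perf_counter()

print(f"list: {t1 - t0:.2f}s  set: {t2 - t1:.5f}s")
```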


And I would even say that’s not true most of the time. Why is the update button slow?

- Is your customer in Asia and your servers are in us-east-1? Do we need a multi master database one in each region? Can we make even that faster by doing an eventually consistent write?

- do we really need a synchronous update process or can we use queues to make it more consistent?

- is our web server slow? Should we scale horizontally or vertically? Should we use autoscaling and if so which metric should we use? What should our cooldown time be between autoscaling events? Do we need to autoscale across regions? Where is our traffic coming from?

- is our database indexed properly? Did we look at our slow query logs? Did someone do something stupid like putting triggers on our database unnecessarily? Is an RDBMS the right choice? Do we need to denormalize the table?

- or is it our code?

This is the thought process my manager was looking for when he interviewed me. Not the best way to traverse a tree.


Answer 5: the update was implemented as O(n!) over the input dataset graph because the coder didn't know any better.


But with a complex system, how would you know where the bottleneck is if you don’t know how to instrument your entire system, and how would you know the possible solutions?


By doing proper architecture design and data analysis before writing a single line of code, instead of coding away without any sense of direction.

Delivering a good result out of that process requires knowing algorithms and data structures tailored to the problem space.


Right. Like that’s going to happen in an environment where shipping code in two-week sprints is expected. Even in your perfect world where this does happen, it’s not like you could possibly know what types of bottlenecks or usage patterns will appear until you get real users using your code.

Are you suggesting we go back to a waterfall approach and not get fast feedback and learn what works as we are developing?


It does happen in two-week sprints; that is what refinement planning, spike stories and research sprints are all about.

And in case you missed it, the large majority of companies that actually moved to agile are nowadays doing what we could call scrum-waterfall.

Plenty of bottlenecks and usage patterns are already clear from reading the RFP documents and preparing the respective sales pitch offer.

Surely if one codes away without thinking about the overall system architecture, like the TDD proponents do, then these problems aren't possible to predict.


So within those two weeks while you are “researching”, how are you going to know the real usage patterns of real users? Is your research going to perfectly predict where all of the bottlenecks and optimizations need to be in the entire system?

Are you going to perfectly predict the size and number of VMs that you need? The size of the database? Where your users are and the average latency? Are none of your developers going to make mistakes that aren’t apparent until you are running at scale?

There is more to architecting a system than just “code”.


Sure, there is more to architecting a system than just "code", which is exactly my whole point.

Performance is a feature; it doesn't get retrofitted. There is only one shot, especially in fixed-budget projects.

While a perfect design is a utopia, and there will surely be some unforeseen problems, not designing at all is even worse.

Calculating the initial set of VMs, database size, average users, network latency, you name it, only requires reading the RFP requirements, having technical meetings with all partners about those requirements, and having a team that knows their stuff around CS.

If it is already clear from the deployment scenario that at the very least 4 VMs will be needed, or that a DB node will need 100 GB on average, it would be very risky just to do it on the go.

As for running at scale, that should already be obvious from the RFP requirements, unless we are speaking about startups dreaming of being the next FAANG.

MongoDB is a very good example of running at scale without doing the necessary engineering, but they do have a good marketing department to compensate for it.


Performance is a feature, it doesn't get retrofitted. There is only one shot, specially in fixed budget projects.

So you’re saying it’s not possible to add indexes, increase the size of your database, increase the number of read replicas, increase the number of servers in your web farm, reconfigure your database to be multi-master, copy your static assets to a region closer to the customer, or add a CDN after an implementation? I must be imagining the things I’ve been doing with AWS...

To calculate the initial set of VMs, database size, average users, network latency, you name it, it only requires reading the RFP requirements, having technical meetings with all partners about those requirements, and having a team that knows their stuff around CS.

So would “knowing CS” have helped us predict, at one of my previous companies, that our customer was going to more than double in size in less than a year through an acquisition? In fact, this has happened at two separate companies. At the other company, we more than doubled in size and revenue literally overnight.

Will “good CS design” help us predict how successful our sales team will be in closing deals? We are a SaaS B2B company where one “customer”, or a new implementation from a current customer, can increase our volume of transactions by enough to have to increase the number of app servers, or, with enough implementations, the size of our database cluster.

If it already clear from deployment scenario that at very least 4 VMs will be needed, or that a DB node will need 100 GB on average, it would be very risky just to do on the go.

So now it’s “risky” to click a button and increase the size of our web farm by raising the desired number of servers in our autoscaling group, or is it risky to click another button and increase the size of the VMs in our database cluster? The number of app servers we have for one process goes from 1 to 20 automatically, based on the number of messages in the queue. As for storage space, if we need a terabyte as our client base grows instead of 100GB, I’m sure AWS has some spare hard drives lying around that they can give us. And transparently adding space to a SAN, even on-prem, has been a solved problem for a long time, even back at a previous company where we would boast to our clients that we had a whole terabyte of storage space.

As for running at scale, that should already be obvious from RFP requirements, unless we are speaking about startups dreaming of being the next FANNG

Again, you don’t have to dream of being the “next FAANG”. Mergers and acquisitions happen. Getting new clients happens (hopefully). When you are a B2B company, especially a SaaS B2B company with a decent sales team, a sale to a couple of “whales” can mean adding more of everything.

Also, the RFP is not going to tell you that the company you are implementing and hosting a solution for, sized for x users, will need to be able to handle 2x in a year after a merger closes. Should we have 5 or 10x the capacity now, in anticipation of our sales team producing, or should we scale up as needed?

MongoDB is a very good example of running at scale without doing the necessary engineering, but they do have a good marketing department to compensate for it.

I had no problem with the scalability of Mongo at a previous company. What type of scale do you think in your experience is too much for Mongo?


Life is beautiful when one does time-and-materials projects.

Every half-baked release can always be improved later, at the customer's expense.

Likewise, not everyone is doing button clicks on AWS to scale their compute center, and proper knowledge of distributed systems is required in order to scale correctly.

MongoDB's problems are well known across the interwebs.

I am not going to change your mind, nor will you change mine, so let's leave it here.


Every half backed release can always be improved later, at customer expenses.

So now it’s an “improvement at the customer's expense” to add servers and increase the size of servers? How long do you think it takes to do everything I listed to add scale? When I say it’s logging into a website and clicking a few buttons, I am not exaggerating. Of course, in the modern era you modify a CloudFormation template, but that’s an implementation detail.

Likewise, not everyone is doing button clicks on AWS to scale their compute center, and a proper knowledge of distributed systems is required in order to do correct scaling.

Whether you are button-clicking on AWS or using a data center, adding resources is the same. Increasing the size of your primary and secondary databases is the same on-prem. It takes more effort, and the turnaround time to provision resources is higher, but it’s not magic. Everything I listed except the CDN is something I’ve done on a team that worked on-prem. I’m sure a lot of people can pipe in and say they have done similar things on-prem or at a colo with Kubernetes and Docker, but that’s outside of my area of expertise.

Mongo DB problems are well known across the interwebs.

I am asking about your personal experience not what you “read on the internet”.


In my opinion it's a good thing to have a point of comparison to other languages when writing code; there might also be concepts prevalent in other languages that your primary language doesn't commonly use but that would be useful. It's really similar to learning multiple spoken/written human languages.


If you knew Ocaml, when Java generics rolled around, they were no big deal. If you had a handle on Haskell, Rust's traits are familiar.


New concepts that you can then use anywhere, though.


But also learn one or two languages that fit your domain of work really, really well. I have seen too many programmers justify their lack of skill in the main language they are supposed to know by bragging about their knowledge of some hipster language they rarely use: "my code is way too slow, but hey, I know the syntax of dependent types in Agda..."


I learned many languages for years and received a poor return on the time investment. Later, I started learning AWS in depth and studying for certs. I now know way more than I previously did about networking, envelope encryption, messaging, infrastructure as code, pipelines, caching, CDNs, networking protocols, load balancers, DNS, and designing highly available and scalable systems. In summary: don’t forgo learning about systems to chase a hot new language.


Learn shell, a scripting language like Python, and a systems language like Java, C, Rust, or Go. I think that’s all you need to know as a backend or systems/DevOps engineer.


“That’s all you need to know” is very reductive. Never stop learning: add JS and functional languages such as Clojure, Scala, OCaml. You don’t need to use a language every day for it to be useful. Learning a bit of Objective-C taught me a lot about naming variables. Smalltalk taught me about non-class-based OOP. Forth taught me low-level stack operations. Prolog, Io, BASIC, Erlang taught me a lot of things I use everyday, even if I don’t use the languages themselves.


The more languages you learn, the more you realize they have more in common than not. At some point it’s time to stop learning 20+ languages and start building stuff. I never said to stop learning, but be very careful what you spend your time learning. Being world class at Python alone, able to solve just about any problem with it, is better than knowing many languages. Knowing a language doesn’t mean you know all of the standard and third-party libraries, much less how to solve challenging problems efficiently.


> The more languages you learn, the more you realize they have more in common than not.

That happens if you learn many similar languages, which is a waste of time. You have to learn languages in different paradigms to learn new things and discover languages that have nothing in common.

> At some point it’s time to stop learning 20+ languages and start building stuff.

You can’t learn a language without building stuff.


“You can’t learn a language without building stuff” <- the title of this post implies not only that you can, but that you should. I believe it’s a waste to learn without hope of applying what’s been learned. I’m learning TLA+ right now because I plan on using it to build more robust systems.


Throw in some basic HTML, CSS, and JavaScript as well.


And systems aren't rocket science; they've been coded by good engineers who knew the limits, which gives you a good abstraction so you can learn them quickly. That is: get your hands dirty and start coding. Look where those who were _so_ good at Novell NetWare admin went. Look where those who were _so_ good at OpenStack went. And before long, you'll see where those who are _so_ good at k8s go XD


Based on my personal experience, I'd say there are good "buckets" of language-context pairs and you can pick one from each according to taste.

C/C++/Rust/Ada on bare metal or systems work for building abstraction upwards from the hardware.

Clojure/Scheme/Common Lisp/Racket: A good dynamic language that's extremely composable, to build abstraction downward from human logic.

ML/F#/Haskell: Powerful type systems that can layer abstraction on abstraction.

I have no experience with the following, but I imagine at least Coq and Erlang belong to similarly mind-expanding buckets.


I've drawn slightly different buckets for myself / experiences:

A high performance systems language: Asm/C/C++/Rust

A garbage collected language (JIT or compiled): Java/C#/Go

A scripting language: Python/Perl/Ruby/Lua

I think one from each bucket and you'll be able to excel anywhere.


Instead of learning more programming languages, learn more programming-language paradigms. If you know OOP, try functional programming; if you use dynamic types, try static types as well.


What terrible advice.

Does the author have trouble with discipline?

You don't get anything by learning more and more programming languages. Programming languages are tools; be expert at 2 or 3 languages and that should be enough. Learn anything more only to solve a specific problem.

You understand the crux of a language by being expert at it, not by being a "me too" novice at it.


> You understand the crux of a language by being expert at it not by "me too" novice at it.

Let's not pretend "knowing" a language well is akin to a 10-year-journey like some arcane samurai art.

If you

- could build an interpreter for a minimal version of the language (a sketch of how small that can be follows below)

- can expand most syntactic sugar into more minimal constructs of the language

- can reason about the language in the usual PL terms (call-by-value/call-by-name, pure/impure, statically/dynamically typed, ...)

- know the 5-10 most important milestones in the history of that language

- know the standard libraries so that you don't repeat code that is written there,

then what use is there in mastering the language further? If someone is experienced in language learning, the above can be accomplished for nearly any language in, I don't know, a year? At that point of mastery, it makes much more sense to learn another way of thinking instead of memorizing the official language specification verbatim.
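(For a sense of scale, "an interpreter for a minimal version of the language" really can be small; a complete toy example for a postfix calculator language, in Python:)

```python
# A complete interpreter for a tiny postfix language: numbers and + - * /.
import operator

OPS = {"+": operator.add, "-": operator.sub,
       "*": operator.mul, "/": operator.truediv}

def run(program):
    stack = []
    for token in program.split():
        if token in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[token](a, b))
        else:
            stack.append(float(token))
    return stack.pop()

print(run("2 3 4 + *"))   # 14.0, i.e. 2 * (3 + 4)
```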

A programmer with 3 completely different paradigms to think in will be much more effective than one with just a single paradigm. Time is much better spent learning a new paradigm than gaining that last bit of mastery.


The "we use the right tool for the right job" mindset looks good on paper but doesn't scale very well.

Most of the time it tends to favor developers who are the most distracted by the newest and shiniest trends.

It is helpful for any dev team to have 2 or 3 programming languages in their toolbox that they can use to solve their problems. Any discussion about adding a new language to that toolbox needs to involve discussions about QA, deployment and long-term supportability. Unfortunately, most developers are less concerned with those "non-technical" aspects.


THIS, 100%! I'm an extremely undisciplined, never-finish-even-starting-almost-anything, ADHD-I crazed squirrel-brain that never gets enough of learning "just a tiny bit" of some new programming language, new tech, or even an entirely new field, but hardly gets good at anything.

Sure, learn one or two languages from completely different paradigms than what you use daily, to broaden your mind and seed stuff in context. But then... STOP! And get more projects finished, faster and better, instead; you'll learn 100x faster this way, and learn more useful things.

Then learn some time management and communication skills...


touche


I dislike this reasoning with a passion. I tend to think the pleasure of learning a new programming language is rarely productive, except for newbies. I also think that our world would be a better place with fewer languages; we really have a lot of useless redundancy.

What we need, most of the time, is a good library tailored to solve a specific problem (string manipulation or complex maths could even be done in BASIC or asm with a powerful dedicated library).

Creating a new language is like forking a code base.


I cannot agree more.

Knowing many languages helps me a lot. For example, the import/module system of Python is amazing. I'm not a fan of it at all, but I really like how it was designed. When I came to Ruby, I thought Ruby needed some love for its modules.

Or JavaScript's binding? Even with arrow functions, there are places where you cannot avoid writing `.bind(this)`, and you ask yourself why you have to do this.

Then came the pattern matching of Elixir/Erlang, and it blew my mind; I just want to have that ability everywhere.
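(That ability has been spreading, for what it's worth; Python 3.10+, for example, grew structural pattern matching in a similar spirit. A rough sketch:)

```python
# Erlang-flavoured pattern matching as it now exists in Python 3.10+.
def area(shape):
    match shape:
        case ("circle", radius):
            return 3.14159 * radius ** 2
        case ("rect", width, height):
            return width * height
        case _:
            raise ValueError(f"unknown shape: {shape!r}")

print(area(("circle", 1.0)))   # 3.14159
print(area(("rect", 2, 3)))    # 6
```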

Then came Elm/Haskell and other languages that use whitespace instead of commas/parentheses, and I just love how naturally these languages read.

```
hello(username, country)
```

compare with

```
hello username country
```

The more I learn these languages, the more I appreciate the people who invented them and are always thinking of different ways to do things.


Word, especially the seemingly alien and/or ancient ones such as C, Forth, Lisp, Smalltalk, Haskell, APL etc.

And once you know enough different languages; design and build your own [0], even if no one will use them.

[0] https://github.com/codr7/g-fu


I’d add: make sure you’re learning languages in very different fields. E.g., you won’t get much out of learning Ruby after Python (some, but not much). But C or Haskell? Definitely. Also, don’t automatically be afraid of old languages. People still write a lot of new things in FORTRAN, for instance (really!). And you can make better COBOL jokes if you read its Wikipedia page at least. Also, languages for describing hardware or FPGAs can be mind-bending (Verilog or VHDL), and GPU languages are an excellent thing to have in your toolset, whether it’s just for making pretty pictures or doing machine learning. Also learn at least one assembly/bytecode language, even if it’s just a VM one. Whatever language you learn, make sure it adds something to your toolset you couldn’t get from another language.

The whole “10x programmer” thing is probably based on warped ideas for the most part, but I would guess that if you have a colleague who’s ten times more productive than the rest of the team, it’s because they understand and can reason about parts of the stack and machine that others can’t.


I learned just some rudimentary Haskell. It tends to force you into a kind of bottom-up, test-as-you-go approach, which can work pretty well when you have no hidden state. The next time I wrote a shell script, I did it like that, and started writing much better shell scripts. It also surprised me how much more easily that flow worked with the shell than with Python, even with Python's REPL. To do the equivalent thing in Python, testing functions nearly interactively, I had to do something like import module; reload module; from module import * over and over. At the shell prompt, I just kept hitting the up arrow to source the script I had just saved so I could try out the new function.
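(Concretely, the Python incantation described is the following, with a hypothetical module name, on Python 3, where reload lives in importlib:)

```python
# The edit-save-retest loop at the Python REPL described above.
import importlib
import mymodule                  # hypothetical module being edited

importlib.reload(mymodule)       # pick up the version just saved
from mymodule import *           # refresh the names at the prompt
```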

I wrote a trivial amount of Haskell, and it still changed how I write shell scripts, which I didn't expect at all.


Given my limited memory abilities, I have doubts that I'll ever have a master level or even very strong grasp of more than 2-3 languages at any point in time. C, C++, python, Clojure, and Java are usually in that circular queue (both when I was in school and now in industry), but I find myself humbled every time I come back to one after 6+ months. I imagine that if I were to truly sink 10 years into a single environment, especially a nearly stagnant one like C, I'd carve out some deep pathways in my brain.


Did you find yourself writing Java differently after writing Clojure?


I find myself not writing Java now thankfully :)

My day job is embedded systems programming for aerospace, so it's almost entirely C/C++ with some tooling in Python.


I just make everything like Smalltalk anyway so what’s the point.


Well, it's not a problem, but if you ever want to "fix" it, just take some problem to solve by writing Smalltalk in Prolog.


See Logtalk: https://logtalk.org/

Works with many Prologs, too.


I mostly agree with the article, but for a reason that I did not see clearly reflected in it. I think knowledge of multiple languages (and technologies in general) is important not from the perspective of trying to be an expert in many things (which I think is generally counterproductive), but because it helps one be better (and, arguably, much better) within their own domain.

A few examples to illustrate my point:

- Having some experience with a statically typed language makes a Python developer a better and safer programmer. Compare with a Python developer who is not even aware of the static vs. dynamic typing trade-offs and all the related gotchas. Well worth the investment (see the sketch at the end of this comment).

- Having seen the ease-of-use and power of some data structures (e.g. Python dicts) not natively available in C may suggest to C developer to look for libraries that implement something similar. 10 minutes playing with Python may end up saving countless man-days on a large C project.

- Having even minimal experience with a NoSQL DB may suggest to a DB admin that handling unstructured data on their Oracle cluster may not be the best way to go.

- Having seen FPGA latencies may suggest to a Java developer not to bother with the software that will be competing on latency.

The list is endless. I guess my main point is: even approximately knowing what's out there helps you make much better decisions, where a decision may be anything from picking the right approach in your area of expertise, to picking up a different tool when you need one, to telling your boss to hire the right person, or even not starting on a task due to lack of the right expertise.
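
To make the first point concrete, here is a toy sketch (function and values invented) of the reflex that statically typed languages train; a checker such as mypy flags the mistake before the code ever runs:

  from typing import List

  def total(prices: List[float]) -> float:
      return sum(prices)

  # mypy: List[str] is not List[float]; caught before runtime
  # (this particular call would also raise a TypeError at runtime)
  total(["9.99", "4.50"])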


LISP is worth learning for a different reason — the profound enlightenment experience you will have when you finally get it. That experience will make you a better programmer for the rest of your days, even if you never actually use LISP itself a lot. […]

It’s best, actually, to learn all five of Python, C/C++, Java, Perl, and LISP. Besides being the most important hacking languages, they represent very different approaches to programming, and each will educate you in valuable ways.

But be aware that you won't reach the skill level of a hacker or even merely a programmer simply by accumulating languages — you need to learn how to think about programming problems in a general way, independent of any one language. To be a real hacker, you need to get to the point where you can learn a new language in days by relating what's in the manual to what you already know. This means you should learn several very different languages.

http://www.catb.org/~esr/faqs/hacker-howto.html#skills1


I don't think it's worth studying many programming languages just for the sake of doing so. It's very rare that you get to choose the language to use on a project. If your goal is to be a great software engineer, your professional development time would likely be better spent on domain knowledge and on skills complementary to coding, such as communication and leadership.


> I don't think it's worth studying many programming languages just for the sake of doing so. It's very rare that you get to choose the language to use on a project.

Studying languages helps you far beyond just providing one more choice of language for the next project. That’s precisely what this blog post is about.


Learn Prolog - everything else is an extension.

"One [language] to rule them all."


The great thing about Prolog is that it's so radically different, without being intentionally difficult or obscure.

There's not that much value in learning yet another imperative object-oriented language. If you want to put the field of programming into perspective, Prolog is even more alien than Haskell.


It may depend on the person, so don't take it as a simple rule. Some will find languages that fit their brain so much better that they'll be very happy, but only if they can make a living with them. Going back to Java <8 after OCaml or Prolog is a chore. Plato's cave and all that.

That said, it will probably broaden your mind tenfold.


This. It’s not about learning language A and B because you need them, but because learning language B makes you a better developer in language A.

If you have thousands of hours doing language A, you need several thousand more to meaningfully improve your competence. It’s basically just battle scars from experience at that point.

But if you spend just a few hundred hours learning a completely different language, that gives you a completely different set of brain tools for working in your day-to-day language. It’s not a luxury for a JS programmer to be able to spend a hundred hours trying Rust or Haskell. It’s a cheap way of becoming a better JS programmer without having to spend a thousand more hours on JS.


Learning Clojure opened my eyes big time. Since then I can pick up other languages much more easily than before. The only problem I have is with Java; I could never grasp the different concepts of OOP, but I do not actually mind that.


The author is describing the oft-maligned and perhaps misnamed Sapir-Whorf hypothesis -- or linguistic relativity, if one prefers to eschew eponymous hypotheses.

That is, language influences (the weak version of the theory) or determines (the strong version) how you think. A perfectly cromulent proposition.

This ineluctably leads to arguments about the relative benefits of deep specialisation versus a T-shaped skill set, which in turn invites bureaucrats to make us all uncomfortable by injecting regrettable phrases like 'generalised specialist' into the vernacular.


Would love to see a list of sample projects for each common language that plays to its strengths.

Often, when trying to pick up a new language in my spare time, I can get to the stage of learning the syntax, but if I do it by following a "101" tutorial, I'll end up building another to-do app or whatever — it's very hard from an "unknown unknown" position at the start to find the path into idiomatic "X" that will show you the unique features and paradigms of the language.


I too would really enjoy some sort of “Build X in Y” type of resource for this.


Why should anyone learn the reference manual of a man-made tool they suspect they won't use, when there are yet unlearned reference manuals for tools they will use?

Should people also read the instruction manuals for appliances they have no intent of buying?

An appliance maker should do that, to find ideas to steal. That's about it.

When it comes to learning for the sake of learning, there are more worthwhile things in the world. Learn a foreign language, for instance.


As a totally self-taught, advanced beginner in bare-bones R, Python, CSS, VBA, etc.: what can you recommend as the #1 thing I can do to improve my approach to programming? I'm not talking about syntax, which I can easily reference when needed, but high-level paradigms and thought patterns. Is there a favorite book, school of thought, etc.? Much appreciated; I know it's a broad question.


As someone whose preferred language is Haskell, I've been told in tech interview feedback that I "naturally and unconsciously avoid OO idioms as well as objects with internal state", even when I'm using an OO language. That's the extent to which I was influenced by Haskell's thinking. They didn't give me an offer.
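
For flavor, here is a toy sketch (names invented) of what that habit looks like: instead of an object that mutates internal state, the Haskell-trained reflex is to pass state in and return the new state out.

  # the stateful-object version a reviewer might expect in OO code
  class Counter:
      def __init__(self):
          self.n = 0

      def bump(self):
          self.n += 1  # hidden mutable state

  # the stateless habit: state goes in, new state comes out
  def bump(n: int) -> int:
      return n + 1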


Enaml (https://github.com/nucleic/enaml) is a declarative, component-based GUI library for Python that more or less has the same driving philosophy and feel as React. There's even a parallel project, Enaml-web.


If you are entrepreneurial and already know one functional and one strongly typed OO language, the following will improve your career significantly more than learning another programming language. Learn to:

- sell.

- design.

- write better.

- tell a good story.

- get along with people.

- do proper SEO.

- speak another natural language.

- model and query data.

- understand ML/AI.

This is coming from somebody who taught himself 10+ programming languages.


Learn different kinds of languages. Cross a paradigm, dip your toes in an exotic type system.

Learning Elixir has opened my OO-biased eyes to the new and interesting world of FP. Knowing the theory isn't enough; you have to code it to grok it!


As someone in academia, I refuse to learn even the programming languages I do use


How about: learn new programming paradigms, and be very picky about the actual languages representing those paradigms that you choose to learn. Life is short, and programming languages are a dime a dozen.


I find this to be quite true. I love playing around with new languages for the insights they give me. You can sometimes use ideas across languages.

But the main reason I do it is because I think it's just good fun :)


Nice write-up. I think the most valuable advice for a programmer is "learn a new programming language every year". I read it in a book somewhere, but I don't remember which one :-)


Learning new languages in depth is incredibly taxing, because there are whole jigsaws of syntax you have to piece together at once. But it's worth doing properly if you have the time.


It's been said already, but a man's got to eat. Ain't nobody got time for this.


JavaScript is one of the must-learn programming languages. Server-side code, client-side web app code, npm linking it all together... It seems cliché, but it's also true...


It never should have escaped the browser. It's brought the suck to everything now...


It should have been replaced in the browser.


I want my IDE to look up code from Stack Overflow.


The point is never to learn more, but to learn the essence. There's only one foundational language that must be learned by every top-notch engineer: assembly. After that, everything is just another layer of abstraction. The core concept is just the stack plus pointers to memory on the heap, period. Then learn whatever language you currently need to use. Life is short; don't waste it learning something that doesn't matter.


Choose the right tool for the job!


Off topic, but if you’re going to justify body text, at least turn on hyphenation.


Learn Prolog.


Or Erlang, whose first implementation was written in Prolog. They still share some similarities.


I’m getting tired of this. Yes, I see the benefit of knowing multiple languages, but let’s look at the other side of that coin: there’s the dev who is familiar with a dozen languages and writes terrible code in all of them.

When exactly are we supposed to be learning all these languages? Nights and weekends that effectively have you working 24/7? Or, more likely, some developer who has a personal relationship with the CEO and can do no wrong reads a blog post on language X and catches the fever. Well, he needs some practice, and he’s not working nights and weekends, so now we are rewriting things or working in a mixed environment. He knows just a smidge more than the rest of the team, so he’s in charge, but he can’t be seen as anything less than an expert, so he pretends he is. He leans on dogma and best practices while bullying the team as they screw things up left and right.

Anyone stupid enough to say that we should have stayed with what we had been using will be labeled not a team player and blamed for the failures.

Those failures will be used as proof that our hero is as amazing as he says he is, as he becomes a 10x programmer by kneecapping the rest of the team and valiantly struggling to pull an incompetent team forward.

The next thing that happens is that the good developers leave and the rest are forced out or fired, assuring our hero developer a fresh crop of devs who don’t know the history of the project and cementing the myth that he is a genius in our new language.


> When exactly are we supposed to be learning all these languages? Nights and weekends that effectively have you working 24/7?

By that logic, you might ask how we have time to do anything? How do we find time to eat? How do we find time for HN?

Learning a programming language doesn't have to mean mastering the language. Make a game out of it. Pick one "thing" about the language and spend 15 minutes reading about or practicing that thing. Make an Anki card (or several) on it and run through the deck during some other 15-minute "learning" break. Maybe this doesn't work for you either, but it's something to try other than chaining yourself to your computer until you come out a master.


Perhaps you could eat using your left hand, and use your right hand to make Anki cards. Then you could have lunch and learn a new programming language in 15 minutes.

Meanwhile, you can use one of your legs to relax, and the other one to interact with your family (unless having a family is too unprofessional).


IMHO the whole point of learning new programming languages is precisely not to rewrite everything in the latest hipster language on a whim.

Instead this adds new tools to your toolbox, so next time you can choose a tool that's a better match for a problem instead of trying to bend a language that wasn't created to solve such problems.

IMHO having many small, specialized "speedboat languages" around, each learnable in a few days to weeks, is much better than having a handful of huge "oil tanker languages" which eventually become overly complex "jack of all trades, master of none" boondoggles.


In the context of a distributed application, I definitely agree with this. When constrained to a monolithic application architecture, small focused languages can be problematic: as monoliths grow and become more complex, domain shifts and requirement changes can take you out of that small language's sweet spot.


>When exactly are we supposed to be learning all these languages?

Some argue that as a professional, you should be spending time outside of work learning to improve your career. You can't expect your employer to give you time to learn.

I find it can be difficult though, especially when you want to spend your time outside of work on other things like family and hobbies.


I didn’t say you shouldn’t spend personal time on those things, but you should expect to be compensated one way or another for it. It’s interesting how people in IT separate their skills from their compensation. If my lawyer or doctor is working weekends, I guarantee that one way or another they are making damn sure they’re getting compensated for it.


Most states require between 20 and 50 hours of continuing education for MDs per year. (https://www.boardvitals.com/blog/cme-requirements-by-state/)

Most states seem to require between 8 and 15 hours of CLE per year. (https://www.lawline.com/cle-requirements)

You'll find that with most actual "professions".


My compensation for learning things "on the side" was always simply getting a better job where I could use the skills I had gained.

I've taught myself web development and thus got my first job. I've taught myself iOS development and was able to switch to it full time later.


That is probably because they are two different jobs. Both doctors and programmers get paid while they're at conferences and shouldn't have to pay for travel, lodging, food, or conference fees. Outside of that, I'm not sure doctors need to pick up many skills beyond what they typically get paid to do.

A Rails developer gets paid to make Rails apps and maintain them. If you notice that your company will have to pivot soon and another technology will be needed, I hope they let you learn on the job, but many are far too shortsighted, so you have to do it outside of work. That's the reality of the situation, however stupid.


I certainly know teachers, doctors and lawyers who improve themselves out of their own pocket and on their own time.

Unfortunately not everything gets done, or paid for, during work time.


I absolutely expect my employer to give me paid time to learn, and only work for those that do.


I don't. Oh, I'll spend some downtime learning rather than spending all of it here, but I don't expect formal training, especially if I am asked to jump into some project quickly. In fact, most formal training I have been exposed to outside of school has been horrible and useless; I actively avoid it.


I expect employers to give me paid time to learn the things they require me to use at work, but if it is something unrelated to the company, I don't expect them to give me that time.


But this also has to do with the fact that many people do not see developing software as a career; it's more something they happen to be OK at that makes a bucketload of money. If it's a career, as I decided to make it when I was 16, you can be a genuine senior in a number of languages. After 30 years of learning different languages and environments, and having worked on many large and varied projects using different languages, I feel equally secure writing large codebases/projects in quite a substantial number of them.

It is quite different for someone who is 25 and doesn't get enthusiastic about spending many more years in the field; why would you then learn a lot of different languages? It makes more money, short term(!), to be good at one language and jump up the ladder as fast as you can using it.


[flagged]


I didn't say I started then; I said I decided I liked it enough to make it my career. Others at my university saw it as a stepping stone to (upper) management. I feel the age at which I decided that can be interesting for others thinking about career choices.


Jesus Christ, man, I think that team was going to fail anyway. That's super dysfunctional and the languages thing seems like a very small deal.


Are you under the impression that IT isn’t super dysfunctional right now? Yes, the language thing is a symptom, not the problem.


Dysfunctions are on the team level. I know some exceptionally well-functioning teams.

The field of software engineering as a whole? Nope, not dysfunctional. Creating the most value in the world since the internal combustion engine and doing so super smoothly.


There is code that makes the customer happy and willing to pay for more projects, and perfect code that never sees the daylight.


I can tell you've worked for at least one startup. Par for the course.


Also, it's boring.


I've learned to code methods in OO languages without using loops after learning Erlang. No loops means fewer bugs.


I can’t tell if this is sarcasm, but if it isn’t, it just proves the point. Your teammates must just love working with you. I’d love to see the look on their faces when you committed that code. “I just learned Erlang. Check out what I did without telling anyone. Suck it. Boom!”


It is not sarcasm. Yes, they love working with me; they learn new things from me and they use them. Generally the reaction is "o_O I wish I'd learned this before". I use built-in functions of PHP and Java in ways that avoid loops; there is nothing exotic about it. I don't write Erlang code. Here is an example of what I mean:

  <?php declare(strict_types=1);
  function fibonacci_nth(int $n): int {
      if($n === 0) return 0;
      if($n === 1) return 1;
      else return fibonacci_nth($n - 2) + fibonacci_nth($n - 1);
  }
  function fibonacci_series(int $n): array {
      return array_map('fibonacci_nth', range(0, $n));
  }

And here it is in Python:

  class Fibonacci:
     def fibonacci_nth(self, n):
        if(n == 0): return 0
        if(n == 1): return 1
        else: return self.fibonacci_nth(n-1) + self.fibonacci_nth(n-2)
        
     def fibonacci_series(self, n):
        return [self.fibonacci_nth(x) for x in range(n+1)]


"no loops means less bugs"

But the loops will be clear and explicit. People can check if you have an off-by-one error.

Python does not support deep recursion: once n gets beyond a certain number, the code will crash with a RecursionError.

That is a bug which is much harder to see. The code will also crash if n < 0 is passed to fibonacci_nth(), which is probably a bug.
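
A quick illustration, assuming the Fibonacci class from the comment above and CPython's default limit of roughly 1000 frames (the exact limit and message vary by version):

  >>> import sys
  >>> sys.getrecursionlimit()
  1000
  >>> Fibonacci().fibonacci_nth(2000)  # leftmost call chain goes ~2000 deep
  Traceback (most recent call last):
    ...
  RecursionError: maximum recursion depth exceeded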


And none of those OO languages can be relied on to optimize tail recursion, so now you have a new problem.


Most non-functional languages implement map/filter/reduce as loops under the hood, but realizing that there is an abstraction for general sequence processing frees you up to focus on the core business logic at hand and reduces the surface area of your code for bugs.

Sure, handrolling a loop is a trivial activity that novice programmers can figure out, so you should _never_ get it wrong, because you _never_ ever program tired or hurriedly under deadlines, or make typos, or slip in off-by-ones...
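
To make the contrast concrete, here is a toy sketch in Python (the data and the filter/square/sum steps are invented for the example):

  from functools import reduce

  values = [3, 1, 4, 1, 5, 9]

  # handrolled loop: the index bookkeeping is yours to get right
  total = 0
  for i in range(len(values)):
      if values[i] % 2 == 1:
          total += values[i] * values[i]

  # map/filter/reduce: the iteration pattern belongs to the library
  total2 = reduce(lambda acc, x: acc + x,
                  map(lambda x: x * x,
                      filter(lambda x: x % 2 == 1, values)),
                  0)

  assert total == total2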


I don't recall exactly how many handrolled loops I've messed up in the last 10 years, but the number is either 1 or 2. Could I mess up one or two map/filter/reduces in 10 years? Is it impossible to mess them up? And what about the cases they don't cover, where you roll your own recursion?


OK, you can write a handrolled loop almost perfectly. /Why/ are you writing ten years' worth of loops - something a computer can do for you?


Can it? I'm working on embedded systems, in C++, with C++11-or-earlier compilers. Sometimes I need control of the exact order things happen in. Sometimes I need an action to occur either zero or one times, on the first available entry. Always I need to write code that my coworkers can read.

How much of that can a C++11 compiler do for me?



