As a counterpoint, this "being interoperable with everything" mindset is predominant. Everyone has an interop story; some languages are all about interop (Lua, before Python).
The worst-case scenario shows up in Vernor Vinge's sci-fi as the programmer-archeologist (see https://en.wikipedia.org/wiki/A_Deepness_in_the_Sky): all the code that could have been written has been written, and programmers of the future will be more like archeologists sniffing it out.
In that future, there is no room for green fields, which I find to be very depressing.
Shouldn't line-of-business software naturally trend from being a creative pursuit to a standard engineering discipline or trade, where defined systems are applied to a semi-novel problem using known patterns and rules? Rediscovering the whole thing every time is certainly inefficient.
I think there will always be room for green fields, though. Even in very old fields scientists and inventors still discover new approaches and solve new problems on a daily basis. Yes, this will move to the fringes, but is that a bad thing? I'd rather spend 10% of my time on research and reimplementation and 90% on a Fun New Problem/Approach rather than 90% of my time reinventing the wheel and 10% of my time on new stuff.
I'm pretty sure ML eventually eats the programming cake, though it might not be in our lifetimes. Definitely by the time we have developed FTL travel.
We have reached a very pronounced local maximum with our current programming practices and ecosystems. Reinvention can get us out of that, though it is more likely to fail than not.
Strongly agreed. I've had a fairly long career and worked in dozens of languages. Languages like Crystal (statically typed, but with strong type inference that makes them feel like a dynamic language) feel like they're from 10 years in the future. Using languages from the ML family (OCaml, F#, Haskell, Purescript, Elm, Idris, Agda) makes me feel like I've gone 25 years into the future and been set down in front of a computer. Especially watching Idris write my program for me just based on the type signatures. That blows my mind every time it works (which it often does).
Well, you can take it too far for sure, but I don't think we're even close. I disagree with most of what you wrote:
- There will still be green fields because there are new problems to solve. New languages (and new platforms) are justified for new problems. For example, you will have languages for machine learning, for quantum computing, for computing with security, etc. Microsoft has an interesting P state machine language which adds something new.
I would love to free up some of my brain real estate to learn those new domains/languages, rather than doing the same thing over and over again in different languages.
Although, as I understand it, Eve was a Datalog, and I think that adds something "new". Making that Datalog interoperate might have been more successful than building a cohesive platform around it (though perhaps contrary to their goals).
- "the being interoperable mindset if predominant". I don't agree, e.g. every language has its own package ecosystem. They interoperate (poorly) with C, but not any other language really. The JVM ecosystem is more or less completely distinct than the Go one, although they cover a similar problem space. Likewise for say Rust and Go.
- Python's interop story is not great; it is littered with implementation details. There are 10+ wrappers like SWIG, CFFI, ctypes, CLIF, etc. on top of the Python/C API because it's so prickly (see the small ctypes sketch after this list). Alternative implementations like PyPy don't get much adoption, and there is version lock-in with Python 2 vs. 3. Python 2 vs. 3 is a great example of where beginners get stuck in mind-numbing detail and complexity.
- Lua was designed with good C interop, but that's about it. If anything, it's the exception that proves the rule. It's not a very popular language either -- at least 10x less popular than Python, Ruby, JS, etc.
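To make the Python point concrete, here's a minimal sketch of mine (not anything from the comments above) of calling into C via ctypes, one of the wrapper layers listed; it assumes a Unix-like system where libc can be located:

    # Rough sketch: Python-to-C interop via ctypes.
    import ctypes
    import ctypes.util

    libc_path = ctypes.util.find_library("c")   # may be None on some platforms
    libc = ctypes.CDLL(libc_path)

    # You restate the C signature by hand -- exactly the kind of
    # implementation detail that leaks through these wrappers.
    libc.strlen.argtypes = [ctypes.c_char_p]
    libc.strlen.restype = ctypes.c_size_t

    print(libc.strlen(b"hello"))  # prints 5

Even this tiny example is full of platform details (library lookup, argument types, bytes vs. strings), which is why so many competing wrapper layers exist.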
So if you agree that much of programming is doing the same thing over and over again in different languages, then it follows that adding new languages doesn't help that. It might help beginners, but it doesn't make things better for working programmers, who write most of the code in the world.
I understand that for beginners, you want a coherent experience. Beginners can't learn two languages at once -- that's a very good argument. But I do think there is an economic problem with trying to make a single "cohesive platform" to solve all those problems. Beginners can imagine something that is logically 10 or 20 lines, but it will take 100K, 1M, or 10M lines of code to implement on the back end.
I think that is the problem Eve ran into -- it's simply too much work to try to interpret extremely high level programs at the level of the user's intention.
For example, I just wrote a 20-line piece of JavaScript in Google Apps Script, embedded in a spreadsheet, that sends e-mails. It was a surprisingly good experience -- I had low expectations. But I bet there's at least 1M lines of code in the background to make this all work (probably more, having worked at Google), and there's a whole team of SREs, etc. Some of the code in there is 10-15 years old, etc. It's a huge job.
In other words, I think Eve was a full-fledged platform, and platforms have problems getting off the ground without a billion dollars. Even if your platform is backed by a big company, it might be hard to gain traction. Google App Engine provided a lot of value and a nice experience, enough that Snapchat was built on it, but I think it faltered because of the limited availability of runtimes (it couldn't run PHP for a long time, etc.). In other words, it didn't interoperate with the rest of the world enough, and I imagine Eve interoperated even less.
I'm curious what you folks think of things like Racket and its ability to create DSLs for various problem spaces. This seems like it would let you stay in one language yet have the flexibility to still create the right tool for specific jobs.
The same goes for its macros. Does that give you the ability to define what you want and have the language create the code for you?
Caveat: I'm new to programming and just now learning Racket, so it has me all excited but I barely know what I'm talking about. This thread has been very interesting to read, so I'm curious what others think of languages which have the flexibility to adapt. Maybe it creates a nightmare for maintenance or for understanding other people's code, or something. I'm just curious why Racket's approach, or something like it, hasn't taken off.
I don't have much experience with Racket, but I've read a lot of the papers. I think they're exploring exactly the right problems.
I'm not convinced Lisp is the best substrate for it. It makes more choices than you think (number representations, etc.) -- it's not completely neutral.
S-expressions turn out not to be the lowest common denominator -- strings are! "Everything is a file" in Unix means that everything is a big lump of bytes -- e.g. as opposed to the early Mac file systems which had metadata too.
The web is very much an extension of the Unix philosophy -- composing languages, ad hoc string manipulation, etc. with HTML/JS/CSS and dozens of other DSLs. Of course, this architecture has a lot of downsides, and that's what I'm working on in Oil.
-----
Other anecdotes:
There's a research shell called "Shill" built on top of Racket. I heard that they moved off of it for some reasons related to Racket's runtime.
Also, I heard that Racket is thinking of replacing their runtime with Chez Scheme (which, as I understand it, is one of the more impressive Scheme runtimes, having been in industrial use for 20+ years).
So I think that Racket is good within its own world, its own runtime, but weaker when it has to interoperate with the rest of the world. Unix and the web are the rest of the world... and they don't speak s-expressions, so s-expressions really offer no advantage in that case.
-----
If you think about it, Unix already has the same architecture as Racket. It's a bunch of DSLs (shell, make, C, C preprocessor, linker scripts, etc.) on the same runtime (the kernel, which talks to the hardware). It's just that you are forced to parse everywhere and serialize. That is definitely annoying, and problematic. But empirically, that design seems to be more evolvable / viral / robust.
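As a tiny illustration of that parse-and-serialize tax (a sketch of mine, not anything from Oil): even handing three strings to wc -l means flattening them to bytes on the way out and parsing a string on the way back in. It assumes wc is on PATH:

    # Composing with a Unix tool: serialize, run, then parse the output by hand.
    import subprocess

    lines = ["foo", "bar", "baz"]
    payload = "\n".join(lines) + "\n"               # serialize to a lump of bytes

    result = subprocess.run(["wc", "-l"], input=payload,
                            capture_output=True, text=True, check=True)
    count = int(result.stdout.strip())              # parse it back into a value
    print(count)  # 3

Every hop between tools repeats this dance, which is the annoyance -- but it's also why any language that can read and write bytes can join in.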
Anything you can do with macros, you can also do with a traditional lexer/parser/compiler. Macros make things easier, but conceptually the architecture is the same.
Parsing is difficult for sure, but I do think that most programmers have an unnecessary aversion to it. I hope to tackle this problem a bit with Oil, i.e. make it easier to write correct, fast, and secure parsers.
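To show the kind of thing I mean -- that a small, correct parser doesn't have to be scary -- here's a toy lexer plus recursive-descent evaluator of mine (not Oil code) for expressions like "1 + 2 * 3", where operator precedence falls out of the grammar:

    # Toy lexer + recursive-descent parser/evaluator: expr := term ('+' term)*,
    # term := number ('*' number)*.
    import re

    def tokenize(src):
        return re.findall(r"\d+|[+*]", src)

    def parse_expr(tokens, pos=0):
        value, pos = parse_term(tokens, pos)
        while pos < len(tokens) and tokens[pos] == "+":
            rhs, pos = parse_term(tokens, pos + 1)
            value += rhs
        return value, pos

    def parse_term(tokens, pos):
        value, pos = int(tokens[pos]), pos + 1
        while pos < len(tokens) and tokens[pos] == "*":
            value *= int(tokens[pos + 1])
            pos += 2
        return value, pos

    print(parse_expr(tokenize("1 + 2 * 3"))[0])  # prints 7

A real parser needs error handling and a proper AST, but the structure is the same as what a macro expander gives you for free inside a Lisp.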
Anyway that's my take, hope it helped!
EDIT: I should also say to not take this as discouragement from learning Racket. My first college class was SICP and it had a huge and permanent effect on my thinking. (I learned from reading Racket papers that my TA went on to work on Racket itself.)
I just didn't use Lisp for any industrial job thereafter. But that doesn't mean the experience wasn't valuable. I would definitely learn it for the ideas.