This feels like the same pattern as Dark leaving OCaml for F#: https://blog.darklang.com/leaving-ocaml//. Ecosystem matters a lot these days. Outside of these two specific cases, I wonder if we're, as an industry, too afraid of writing this kind of stuff now. I feel like it was done a lot before, and not at all these days. Sure, NIH syndrome is a fallacy, but having to write one library may not be so bad. I would be glad to hear about any experience with that.
That is why when comparing languages we should always look beyond grammar and semantics.
This is nothing new; it is also a reason why languages like C and C++ won the systems programming wars of the 1990s.
After a while one gets tired of writing wrapper libraries, or of having to pay extra for an additional compiler that isn't part of the platform SDKs.
Hence why successful languages always need some kind of killer feature, or a company with deep enough pockets willing to push it into the mainstream no matter what.
Same applies to new OS architecture ideas as well.
Why wouldn’t Zig be applicable anywhere C is applicable? AFAIK it can also compile to C as a target, in addition to the many architectures it supports.
That's not released or finished, last I checked a few weeks ago.
Edit: Oh, looks like it was released just a few days after I last checked; ha. Although, it's not clear to me whether it's intended for end-user use yet.
Nim is elegant, relatively safe, and not interpreted; its performance is within a stone's throw of C and roughly on par with Zig, but with better safety guarantees.
As I've said somewhere else, don't think in terms of languages but in terms of use cases. Nim can make a lot of sense for some cases where C is used, and not much for others. The same is true for Zig. There is no, and there will never be, a "C replacement", just other options depending on what you are doing.
My two cents as a non-professional programmer: i've found hacking on someone else's codebase to be very hard, with or without a debugger, in dynamic languages like JS/Python where most things are untyped and you get runtime exceptions upon e.g. trying to call a method on a nil object.
BUT back on the thread's topic, since i started programming in Rust, the only time i've felt it was hard to wrap my head around the compiler's output was with complex async programming. Otherwise, every single time i felt like a simple text editor was more than enough with rustc's output to understand what's going on, because in Rust everything is very explicit and statically typed with clear lifetimes, and the compiler has very helpful error messages often including suggestions on how to fix the error.
For me, everything (non-async) Rust is a clear win in terms of understandability/hackability, compared to all other languages i've tried before (admittedly not a lot of them). I think complex IDE tooling can ease the pain, but proper language design can help prevent the disease in the first place.
EDIT: i should add, since i started programming in Rust, i've only once or twice seen a runtime panic (due to an array indexing error on my side). Otherwise, literally all of the bugs i had written were caught by the compiler before the program ran. For me it was a huge win to spend one more hour pleasing the compiler in exchange for spending days less debugging stuff afterwards.
While I use scripting languages when needed, my main languages have always been compiled with static typing. And I did not need a debugger for hacking code. I need a debugger mostly for tracing my own code when I have bugs related to algorithmic errors, not for the program blowing up on me.
I have not programmed in Rust so I cannot really judge the language, but I doubt it is so nice and expressive compared to modern C++ that suddenly the types of bugs I am hunting will magically disappear.
> I need a debugger mostly for tracing my own code when I have bugs related to algorithmic errors, not for the program blowing up on me.
Then you're a much better programmer than i am! :)
For algo debugging i just use pen and paper. For more surprising results, print statements are usually all i need.
> suddenly the types of bugs I am hunting will magically disappear
Maybe not, but i'd recommend giving it a try, if only to offer a different perspective. For me personally, strict and expressive enums, mandatory error handling, and Option/Result types as language core features (among others) have definitely eliminated most bugs i write. Well, i still write those bugs, but at least the compiler doesn't just compile as if everything were fine, and instead lets me know why my program is flawed.
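To make that concrete, here's a minimal sketch (the enum and the names are made up for illustration, not from any real codebase) of the kind of thing the compiler enforces:

```rust
use std::collections::HashMap;

// A made-up enum: the compiler forces every variant to be handled.
enum PaymentStatus {
    Pending,
    Completed,
    Failed(String),
}

fn describe(status: &PaymentStatus) -> String {
    // Omitting a variant from this match is a compile error, not a runtime surprise.
    match status {
        PaymentStatus::Pending => "still waiting".to_string(),
        PaymentStatus::Completed => "all done".to_string(),
        PaymentStatus::Failed(reason) => format!("failed: {reason}"),
    }
}

fn main() {
    let balances: HashMap<&str, u32> = HashMap::from([("alice", 10)]);

    // Lookup returns Option<&u32>: there is no way to accidentally call a
    // method on a "nil" result; you must decide what happens when the key is missing.
    match balances.get("bob") {
        Some(balance) => println!("bob has {balance}"),
        None => println!("no such account"),
    }

    println!("{}", describe(&PaymentStatus::Failed("card declined".into())));
}
```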
Oh, you mean target architectures. But stuff like SuperH is used only for embedded these days, where even C is often rather idiosyncratic. For most coders, Zig is comparable to other mainstream languages in terms of supporting mainstream platforms.
Anyway, this is really a matter of implementation, not a language issue. There's nothing about Zig that makes it inherently impossible to support SH2 or any other platform - indeed, as others have noted, they already have a C backend in the works, so the endgame is to support everything that has a C compiler.
Also, as far as C interop goes, if I remember correctly, Nim can't just take a C header and expose everything declared in it - you still have to manually redeclare the functions and types that you want in Nim, no? You can use c2nim, of course, but that's not really any different than generators for other languages, and requires extra build steps etc. Zig handles it all transparently.
> Anyway, this is really a matter of implementation, not a language issue.
I think separating the two is a bit artificial. Python being slow is partially an implementation issue but the fast implementations can't run everything. When you compare languages, you have to compare implementations, otherwise it's meaningless.
You have to compare ecosystems, but when doing so, you still have to compare PL design and PL implementation separately, because they have different implications. A quality-of-implementation issue means that something can be done, but isn't done by this particular implementation. A language limitation applies to all implementations.
That's not really true, you can work around language limitations. Go has codegen for generics, JavaScript has TS for static types and Babel for """macros""". Lots of proposals that are not in JS yet can be used with Babel. Python has C extensions.
TypeScript is a different language from JavaScript, and C is a different language from Python, so I don't think those are good examples. Similarly, various macro languages that sit on top of something else are also languages in their own right.
And sure, you can always "fix" a language by designing a derivative higher-level language that transpiles into the old one. In fact, this is a time-honored tradition - C++ was originally just such a transpiler (to C). But the very fact that you have to do this points at the original design deficiencies.
> Anyway, this is really a matter of implementation, not a language issue.
That's kind of what people are getting at in this whole conversation though, isn't it: ecosystems around languages matter. They can't be an afterthought.
Of course - which is exactly why Zig is not ignoring this. But we still have to compare apples to apples, and oranges to oranges. My original comment was about languages - specifically the ability of the language to consume libraries from another language with minimal hassle, and the response was that Nim somehow does it better.
I'm not even sure why arch support was brought up in this thread, to be honest, because it's not relevant at all? If your problem is unsupported architecture, it's a blocker long before you need to use any libraries...
> Hence why successful languages always need some kind of killer feature, or a company with deep enough pockets willing to push it into the mainstream no matter what.
There's a third strategy: hitching your wagon to an already-successful ecosystem, like languages such as Kotlin do.
That strategy always falls apart when the ecosystem goes in a direction that the guest languages did not foresee, or for which they already created incompatible concepts, and then they face the dilemma of what to expose from the underlying ecosystem.
Using Kotlin as an example, its initial selling point was Java compatibility; now it wants to be everywhere, and its Java compatibility story is also constrained by what Android's Java is capable of.
So the tooling attrition increases, with KMM, wrapping FFI stuff like coroutines to be called from Java, and everything that is coming with Loom, Panama and Valhalla.
I used to work at an OCaml company and it wasn't nearly as much of an issue as one might predict. You can (it turns out) build a very successful business even if there aren't a lot of existing libraries, or if the language lacks certain basic features like native multithreading (same with Python of course). I don't have a great model for why this isn't devastatingly expensive, but it's probably some combination of
* Most existing libraries are kind of bad anyway so you're not missing out much by not using them
* If you write everything yourself you get system expertise "for free", and gaining expertise in something that already exists is hard
* You can tailor everything to your business requirements
* Writing libraries is "fun work" so it's easier to be productive while doing it
I think Jane Street is a big exception. It's like when PG espouses lisp. Back in the 90's[1] language ecosystems were very sparse. An ecosystem was a few big libraries and a syntax highlighter. Now stuff like IDEs, linting, packages, etc. have made people's standards quite high for ecosystems. On the flip side, back in the day languages like OCaml and Lisp had stuff other languages could only dream about. Functions as arguments! Macros! Type inference! But now, barring macros, these features are all in mainstream languages.
If you were to do a similar company now, you'd have to recruit people who still write code like in the 90's: emacs/vim hackers who can write their own libraries and don't need no stinking IDE. Except you now have a significantly smaller advantage because a lot of the languages have caught up and while your god tier programmers can write their own custom serialization library, that's still more developer time than using serde.
Which is why a lot of people are moving to Rust I suppose. You still get the hip factor but responsibly. It's the Foo Fighters of languages. Cool, but not irresponsible.
The big difference between Rust and OCaml is that a company the size of Jane Street can influence OCaml development, while it takes one the size of Amazon (according to the recent accusations) to do the same with Rust. I think OCaml has one of those "ancient" communities that seem to value independence more than consensus. Rust is very hard to build without cargo, OCaml works fine with make or dune. I'm not sure if focusing on independence is the right tradeoff for most companies, but I can see some cases where it might be.
> If you were to do a similar company now, you'd have to recruit people who still write code like in the 90's: emacs/vim hackers who can write their own libraries and don't need no stinking IDE.
IDE support is getting there with OCaml. In VSCode, it's not as good as TypeScript but it's usable.
> Except you now have a significantly smaller advantage because a lot of the languages have caught up and while your god tier programmers can write their own custom serialization library, that's still more developer time than using serde.
There are a few libraries that you can use. Serde also tends to make the already long compilation time blow up.
I was writing code in the 90's, and my first IDE was Turbo Basic in 1990, to be precise, followed by Turbo Pascal alongside Turbo Vision and Object Windows Library.
Eventually I also got into Turbo C, Turbo C++, and then upgraded myself into Borland C++, used Visual Basic 3.0 in 1994, and a couple of years later Visual C++ 5.0 was my first MS C++ IDE.
Mac OS MPW was an IDE and stuff like AMOS and DevPac were IDEs as well.
Java IDEs started to show up around 1998, like Visual Cafe and the first Eclipse, after being ported from IBM's Visual Age.
Visual Age was IBM's family of IDEs for Smalltalk and C++ on OS/2 and AIX.
The only group that wasn't using IDEs were the UNIX folks; thankfully XEmacs was around to bring back some sanity when I had to code in UNIX.
I'm curious about these early IDEs. My knowledge of 90's programming is solely from secondary sources. What features did they have? Did they do stuff like automatic renaming or goto definition? Were those features done syntactically or semantically? How fast were they? A common complaint I've read is that people could type faster than an IDE could keep up, which is something I rarely encounter these days.
Clojure has access to the Java library ecosystem and works beautifully in IntelliJ. That may be one of the best ratios of language properties to tooling quality.
These days, you don’t need to build an IDE from scratch - you can just build some language server support for your language and plug into existing IDEs. It’s much less work!
Also, as an aside that’s not really germane to the argument, it’s possible (and IMO preferable) to write code without using an IDE. It forces you to write code that’s broken up into contexts small enough to fit in human working memory. This pays off in the long run. However, once people in your company start writing code with an IDE, it requires more context and becomes almost impossible to edit without an IDE.
Haskell is another language besides OCaml that doesn’t have a ton of MEGACORP backing but nonetheless forms the basis for several very successful companies and groups within MEGACORPs, and where many developers prefer the experience of using it despite not having a $10M IDE like you would for Java. And speaking of that, all the ludicrously expensive and complicated IDEs mostly suck anyway!
> you'd have to recruit people who still write code like in the 90's: emacs/vim hackers who can write their own libraries and don't need no stinking IDE
I wasn’t writing code in the 90s, but I’ve worked at places like this and I would take it any day over “people who copy/paste from stack overflow and get lost without autocomplete” - unless the novel alternative you have in mind is something better than that?
> Which is why a lot of people are moving to Rust I suppose. You still get the hip factor but responsibly
Rust does seem to be in the Schelling point of “better enough than C++ to get us off our asses, but not so much better as to scare the devs”. Not sure I’d say it’s especially “responsible” though.
Language servers are certainly a big improvement. However there's a difference between "there exists" and "this is a community priority". In some language communities the developers use IDEs, they like IDEs, and they make IDEs a priority. In other communities there's one or two people who like them, kinda, and maintain a plugin. Let's put it this way: I don't see OCaml moving to a query-based compiler anytime soon.
I'm not sure I agree with the no-IDE part. It feels very "Plato complaining about the invention of writing". Human working memory is quite narrow and quite fickle. If you step away from a codebase for a while, or you're learning it for the first time, an IDE can really help with your bearings. I agree that code should be broken up into contexts and well organized, but I don't think the editor should be the forcing function here.
And IDEs are great! Goto definition that works even in complicated situations unlike ctags; inline type inference; generating imports. I don't begrudge someone using emacs or vim (2nd generation emacs user) but I gotta say, IDEs work wonders.
As for who I'd recruit, I think it's a false dichotomy to say that the alternative is "people who copy/paste from stack overflow and get lost without autocomplete". There's plenty of great, legit developers who can write you a custom serialization library in nano, but choose to use IntelliJ and packages because it gets the job done.
I don't mean to denigrate the OCaml, Haskell or Lisp communities. I wish more people wrote them! But I also recognize that these languages went from secret weapons to, well, a valid option in a trade off matrix. I'd still love to work at Jane Street, although between this comment and my application record, that may be a pipe dream.
> Also, as an aside that’s not really germane to the argument, it’s possible (and IMO preferable) to write code without using an IDE. It forces you to write code that’s broken up into contexts small enough to fit in human working memory
No, complex programs by definition don’t fit into human working memory. Even with best practices, FP, whatever, function composition alone can’t always elevate the complexity to the requirement’s level — so in the end you will end up with larger chunks of code for which you will have to use code navigation features - for which I personally prefer an IDE, but that is subjective.
> No, complex programs by definition don’t fit into human working memory.
If you write your code in the right way they don’t have to. That’s the point.
You shouldn’t need to comprehend your entire program at once to work with it.
> Even with best practices, FP, whatever, function composition alone can’t always elevate the complexity to the requirement’s level
Function composition isn’t the pinnacle of abstraction. We have many other abstraction techniques (like rich types, lawful typeclasses, parametric programming, etc.) which allow for any given subcomponent of a program to be conceptualized in its entirety.
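As a rough illustration of the parametric-programming point (this is just a hedged Rust sketch with made-up names, not anything from the parent): a sufficiently polymorphic signature tells you almost everything the function can do, and a small newtype keeps units from getting mixed up, so each piece can be understood in isolation.

```rust
// A generic function like this can be understood almost entirely from its
// signature: knowing nothing about T beyond PartialOrd, all it can do is
// compare and select elements of the input.
fn largest<T: PartialOrd>(items: &[T]) -> Option<&T> {
    items.iter().fold(None, |best, x| match best {
        Some(b) if b >= x => Some(b),
        _ => Some(x),
    })
}

// A "rich type" in the sense above: the unit is part of the type, so mixing
// meters with some other unit is a compile error, not a bug hunt.
#[derive(Debug, Clone, Copy, PartialEq, PartialOrd)]
struct Meters(f64);

fn main() {
    let distances = [Meters(1.5), Meters(4.2), Meters(0.3)];
    println!("{:?}", largest(&distances)); // Some(Meters(4.2))
    println!("{:?}", largest::<i32>(&[])); // None
}
```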
> If you write your code in the right way they don’t have to. That’s the point.
I recommend you try working through some equality proofs in Coq, first with and then without coqtop / Proof General. I think you may change your mind about this rather rapidly. And many proofs get much more complex than that.
I’ve used (and developed) plenty of proof assistants. Proofs are one very narrow domain where automation is basically a no-brainer. You don’t really lose out from the proof having high semantic arity. With normal code, you do lose out.
I dunno. I think the line between "normal code" and proofs is a lot blurrier than many people make it out to be, and I don't think there's a huge advantage to asking people to run a type checker or inference algorithm in their heads even if you have managed to encode all your program's invariants in the type system (which is impossible or impractical in many languages). I say that as someone who doesn't use an IDE except when I'm forced to: I know I lose out on a lot of productivity because of it, and I don't find that removing the IDE forces me to develop better or more concise code.
I can work without autocomplete, but I find my productivity is about 20% when I have to go back and forth with the compiler on syntax errors and symbol names, like back in college. Working professionally with an IDE that just doesn't happen.
PG seems to be the only person who has built a successful business using Lisp. While thousands of successful companies are using C++/Java/.. etc. why do you think so few companies have succeeded with Lisp?
While it's true that there are lots of bad libraries and many libraries are easy to write yourself, you really isolate yourself from the broader ecosystem by doing this. Vendors and 3rd-party solutions are now much harder to use, and when you use them you'll probably only use them at a surface level instead of using all the features.
And some things are so mature and vast you don't have a chance of building them yourself. If what you are doing can be done well in very mature ecosystems like React or PyTorch, the effort to recreate them will dwarf the time spend on important work.
> If what you are doing can be done well in very mature ecosystems like React or PyTorch, the effort to recreate them will dwarf the time spend on important work.
Sometimes that's effort that's not really necessary. At work we had a team build a dashboard with React and a few graphs recently. It clocks in at 2000 unique dependencies. That's not a typo, it's two thousand dependencies for a few graphs. Reimplementing all of that would take many man-years of work, but I think it wouldn't be necessary in the first place. Chart.js doesn't use any runtime dependencies and could probably fill most of our needs. Chart.js is 50k of JavaScript, which is a lot, and probably more than we need. I don't know how much time it would take to make a reimplementation with the API we need, but I think it's doable. Why would we want to do that? Because those 2000 dependencies are a liability. Last time I checked, 165 were looking for funding. It would be easy to imagine a malicious actor paying a few people to contribute to low-profile but widely used JS libraries, and take over the maintenance when the original maintainer becomes tired. I don't know if this is a worse liability than developing our own chart library. I don't know much about security and the tradeoffs involved.
All of that to say, isolation from the broader ecosystem may be a good thing.
Yes, it's not free in the sense that this is paid developer time, and also a delay before actual production deployment.
As long as learning the basics of a 3rd-party library takes a relatively short time, those who use it have an advantage: they ship faster. Certainly mastering it may take as long as writing one's own. But you can do that while writing more production code and shipping features. Also, you get improvements made by other people for free (because likely you're using an OSS library anyway).
That's interesting, thanks for sharing your experience. Sometimes I wonder how much interesting experience I miss by mostly using things people already built. Sure, if you're just trying to get something done it's probably faster, but on the other hand spending more time gives you experience.
My experience as well. I worked in industries (Games development) where no open-source or proprietary solutions existed for what we needed. So we built it ourselves. It wasn’t that hard and we never had problems with it because it did 100% exactly what we needed and nothing else. If any new feature was needed we simply added it. I would routinely get an ask from a programmer, artist, musician or game designer and would have it implemented and a new version ready for them within a day or two. The productivity gains were immense.
Multi-threading can often be handed off to the OS in the form of just running more processes. So in most cases there is really no need for the language to handle it.
It's still nice to have shared memory, especially in a functional language where, thanks to lots more immutability, it isn't as big of a price (i.e. it's more concurrency-safe). Especially if you're sharing in-memory caches and the like.
I've seen this save tons of dollars in cloud setups (100,000's) in my career in more than one place. Safe multi-threading was marketed to many people as part of the advantage of functional programming, which is why I find it strange that OCaml didn't have it for so long. Lightweight threads (e.g. Tasks, Channels, etc.) use fewer system resources than processes as well.
You can do anything with anything but you usually pay some price of inefficiency to do so. That may be small, but sometimes it is large enough to matter.
Agreed; especially since it didn't have it originally. I'm sure some compromises were made to do it in a way that fits into the execution model/doesn't cause regressions to older code. They are both good languages for sure which makes sense because one is derived from the other.
F#, through itself and .NET, has had the equivalent functionality for many years however (Threads, Tasks, Asyncs, Channel libraries, etc.), being one of the first languages (before C#) to have an Async construct. I would imagine it would take some time for OCaml's ecosystem to catch up API-wise with this where it matters for application development. Looking at OCaml's recent release notes I see work to slowly migrate multicore into OCaml, but the feature itself hasn't landed yet? Think it comes in 5.0.
I have to say though, F# performance is quite good these days from working/dabbling with both languages, plus the ecosystem is bigger. I would like to see/understand where the OCaml perf advantage is, if there is any, for evaluation. The CLR has really good performance these days; it has features to optimise some things (e.g. value types make a big difference to some data structures and hot loops vs the JVM) from my experience, especially if you don't allow unsafe code for many algos/data structures. For example I just looked at the benchmarks game (I know it has its problems, including bad contributed implementations for niche ecosystems) but it shows .NET Core (and therefore F#) performance is on the same scale as OCaml, at times beating it (https://benchmarksgame-team.pages.debian.net/benchmarksgame/...).
If you don't need to share any data between processes, sure. Otherwise, you will find yourself forced into a pattern that is:
* extremely costly
* hard to make portable cross-platform
* hard to make secure (since data external to the process can't be trusted like shared-memory data can).
* requires constantly validating, serializing, and deserializing data, wasting developer time on something irrelevant to their problem domain
* adds a bunch of new failure modes due to the aforementioned plus having to worry about independent processes crashing
* fails to integrate into the type system of your language (relevant for something like Rust that can track things like uniqueness / immutability for you)
* cannot handle proper sharing of non-memory resources, except on a finicky case-by-case basis that is even more limited and hard to make cross-platform than the original communication (so now you need to stream events to specific "main" processes etc., which is a giant headache).
* can cause significant additional performance problems due to things like extra context switching, scheduler stupidity, etc.
* can result in huge memory bloat due to needing redundant copies of data structures and resources, even if they're not actually supposed to be different per thread.
Make no mistake here: when you're not just running multiple copies of something that don't communicate with each other, "just use processes" is a massive pain in the ass and a performance and productivity loss for the developers using it, as well as hurting user experience with things like memory bloat. The only reason to go multiprocess when you have threads is to create a security boundary for defense in depth, like Chromium does--and even they are starting to reach the limits of how far you can push the multiprocess model before things become unusably slow / unworkable for the developers.
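For what it's worth, here's a minimal sketch of the shared-memory side of this argument (the cache contents are made up, nothing from any real system): threads can share one immutable structure by reference, where separate processes would each need their own copy or some serialization over IPC.

```rust
use std::collections::HashMap;
use std::sync::Arc;
use std::thread;

fn main() {
    // An in-memory cache built once and shared by reference across threads.
    // With separate processes, each worker would need its own copy, or the
    // data would have to be serialized over some IPC channel on every lookup.
    let cache: Arc<HashMap<&str, u32>> =
        Arc::new(HashMap::from([("config_ttl", 300), ("max_retries", 5)]));

    let handles: Vec<_> = (0..4)
        .map(|i| {
            let cache = Arc::clone(&cache);
            thread::spawn(move || {
                // Shared, immutable access: no copies, no serialization, and
                // the type system guarantees nobody mutates it behind our back.
                println!("worker {i} sees max_retries = {}", cache["max_retries"]);
            })
        })
        .collect();

    for h in handles {
        h.join().unwrap();
    }
}
```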
There isn't really a need for anything, which is kind of the point. You can use a random language that's missing a ton of functionality and you can probably make it work.
Kind of, but OCaml-to-F# is like "Dark goes from the Prius of languages nobody uses, to the Tesla of the .NET ecosystem." The aims (ecological in the car case) of the purchaser are similar, but the car [at least in its ecosystem] is sleeker and now the roads are a bit different.
On the other hand this is like "Wallaroo moves from the Cadillac of languages nobody uses, to the Chevy Equinox of languages nobody uses." Like, totally fine, you grew older and had kids and needed an SUV to keep up with home life, no shame in that... but there is a wistful "ah when we were young" to the transition, no?
I know a little about cars and I'm still confused - I think I just have a very different view of the relative "it factors" between each pair of languages. De gustibus...
Yeah I mean that's fair too. My impressions are poorly formed but the analogy is that both F# and OCaml are based on functional programming, which to my mind takes a step back from the "imperative programming -> OOP -> shared state multithreading" history into an alternative history, I'm phrasing this as them being "electric cars" for the first half. OCaml is not really the swankiest of swank in the alt-languages community, so I chose a Prius to be like "what's a car that folks know as eco-friendly but it's not very prestigious?" ... meanwhile F# is like "we are THE functional programming of the .NET world, come over here we're cool and slick and all-electric" hence the Tesla comparison.
Pony is like the far-off "ah maybe someday I'll be able to use that at work, maybe for some little experiment" thing and it reminded me of going to a dealer and being like "let me drive the Caddy, you know I'm not gonna buy it, I know I'm not gonna buy it, but I just wanted to live a little today." I don't have any particular experience with Chevy SUVs so I just chose one at random, the point was that Rust is like a "look we're just trying to be C with explicit contracts between parts that allow for memory safety" type of language, very practical and chunky and like people love it don't get me wrong... just, it's an SUV. It's less opinionated and more just "let's get from point A to point B safely."
I think it's one of those cases where using metaphors doesn't help clarify the thought, and instead obscures it. Rust shares a lot with OCaml, and so with F#. F# is "the" functional programming language of the .NET world, but it's also because it's the only one, and it's a second class citizen.
I will also add that Rust is not trying to be C (and it's not trying to "replace C" either). It's here to offer an alternative that in some cases makes more sense than sticking with C. C code means a lot of things. For example, some people code in C89 because they find some kind of purity in it. You're never going to get that from Rust. For some people, it means fast and secure code, like with Python's cryptography. That's a place where Rust can be used. For some other people, it's C because that's the only thing that's allowed by some authority. Again, Rust isn't going to fit here until/unless it's allowed. I think in general, trying to reason in terms of use cases leads to better comprehension than trying to think in languages.
But outside of that, the move was basically the same. They found another language that's very similar, but that has a way bigger ecosystem.
> Rust shares a lot with OCaml, and so with F#. F# is "the" functional programming language of the .NET world, but it's also because it's the only one, and it's a second class citizen.
No, F# has a lot more OO than OCaml, and there’s a significant difference in features (e.g. active patterns in F#, functors in OCaml). I liked what I used of F#, but for any serious program it’s more multi-paradigm than functional, since you’ll end up doing a lot of OO.
In my experience that isn't quite true. You usually use OO for IO/interop if a C# library is being used, then it's module code for the most part all the way down (e.g. in ASP.NET Core, define a class for the controller, then have it interop to F# FP code for the most part). With some newer F# frameworks you don't even have to do that these days.
Having some experience with large-scale F# codebases, it's rare that you define a class compared to records, unions and functions. 100's of functions, 50-100 small types, and 1-2 classes is roughly the ratio I've seen for a typical microservice (YMMV).
I have a very different perception of OCaml compared to you (compared to most people?)
When I think of OCaml, the concepts that come to mind are brilliant French computer scientists and hedge funds. A practical Haskell. When I think of F#, I think of... .NET, bright enterprise programmers who want to work with a tolerable language, and that's about it. If forced to name a user of it, I'd say "uhh, no idea... Maybe Stack Overflow?"
(That's entirely aside from the relative merits of both languages, which are leaps and bounds ahead of both most OOP and functional languages.)
I've seen both being used by hedge funds and finance/banks actually. A lot of F# use is anecdotally closed-source finance (this has changed now I think), which is why IMO it didn't have as much open source visibility or people showing its use. OCaml is probably in a similar boat. Having hidden use cases, however, means breaking changes in the language are harder to judge.
OCaml's object system is better than what .NET offers, IMO. On one hand, it enforces clear interface/implementation separation ("classes aren't types"), while structural typing for objects makes this arrangement easy to use in practice. But then there are also powerful features such as multiple inheritance.
The biggest quirk coming from something like Java or C# is that you can't downcast. But classes can still opt into this ability (by using virtual methods + extensible variants) where it makes sense; and in most cases, the presence of downcasts means that a discriminated union is probably a better fit to model something than a class.
It's things like that that make OCaml what it is. It supports OO even if you don't use it all the time, and it supports imperative constructs. I remember reading in "Le langage Caml", by Xavier Leroy, that you should use an imperative loop over recursion if the loop is simple, and keep recursion for complex use cases, where it makes sense. That's not something you often hear from functional programmers, probably because the ones we hear are more obsessed with purity than practicality. But it's a great way to show OCaml's values.
Was Pony really a Cadillac? Cadillacs are supposed to be comfortable and large (and not fast). Part of the design (if you believe Robert Virding) of Erlang's process system is that it makes error handling (an important part of being a developer) very comfortable because it just does the sane thing with little or no effort. Pony, by contrast, obsessively required you to handle errors and shoehorned you into a theory-inspired actor system, missing the point that Erlang processes are practical units of composable failure domains, not theoretical actor concurrency units.
Yeah I was worried about that part giving offense, but the sentiment became a lot longer when shifted from “language that nobody uses” to “language that I can't convince any employers to let me use because they are worried about hiring problems” lol
It also makes me think of CircleCI where they stayed in Clojure for quite some time - it really didn't have that much need for libraries (and the ones it did need, such as AWS, were provided by Java).
When evaluating whether to use a non-mainstream language, the rule I use now is:
- will I need to interact significantly with the outside world in a way that can only be done with libraries?
- if not, do I gain a lot with this non-mainstream language?
That contrasts against how I used to do it, where I viewed it as a trade-off between the ecosystem and the advantage of the non-mainstream language.
It's not just the libraries, it's the tools, and those are a much bigger lift. I noticed it with Scala dropping Eclipse support and some users shifting to Kotlin; you couldn't have a clearer example of a strictly worse language, but JetBrains and Google are supporting it, and the difference between a good IDE and not is huge. And when I tried to step up and fix the Scala Eclipse plugin myself and saw what kind of byzantine tower of confusion goes into making an IDE I started to have a bit more sympathy for that kind of decision.
> you couldn't have a clearer example of a strictly worse language...
A language that does NOT have as many features and limits more what you can do is NOT a strictly worse language. You can never say a language is worse than another, anyway, in general: it's always relative to what usage you have in mind. Your apparent disdain for a language just on the basis of the language features shows that you have a lot to learn about language economics, mentioned in other threads here.
On the contrary, it takes zero knowledge or experience to say "hurr durr use the right tool for the job"; anyone who has real knowledge and experience should have actual views on which things are good and bad overall.
I’m not one who values languages based on the number of features alone (otherwise C++ or C# would be my go-to), but based on the synergy between them. I think in this case, Scala is a really elegant language with many features that all come from some simple-to-understand primitives, for example that everything is an object, which creates a highly coherent language.
This is in contrast with Kotlin, which tries to gain popularity by including many features, but always feels like “just syntactic sugar over Java” to me.
> Kotlin is strictly worse if you value language features above all else. In that, there are several (several!) features it doesn't have.
Could you please elaborate on that? It was my understanding that Kotlin did everything that Java did (or any JVM-based language) but actually added first-class support for basic features missing from Java that required magic sauce like Lombok to fill in the gaps.
It's never had working error highlighting for Scala. I filed a bug where using a parameterized member type was incorrectly highlighted as an error; the next version, parameterized member types were never highlighted as errors. I filed a bug where a case that should have been an error wasn't; the next version, my original bug was back. I gave up at that point.
Heh, that's a good reference I hadn't read before. I feel like maybe Clojure and OCaml were strikes one and two, followed by a home run with the F# and .NET call. Prob feeling real smug right now with AOT, WASM, and hot reload being first-class citizens haha.
How depressing. Probably true. But depressing nevertheless. Bigger frameworks, more complicated libraries, deeper multi tiered tooling, all of these things that we call ecosystem, reduce access to general purpose programming and creativity. We've created a bureaucracy of execution so complicated that we need vast amounts of funding to keep us at the tiller doing the biddings of e-commerce apps. It's like the founding fathers of programming have been reborn as Sir Humphreys.
This is how it is in every applied field. It's not as if 2x4s fall from trees and we use them in our construction. The 2x4 is a specific manufacturing output that's used as inputs for lots and lots of other things. It's the same way when it comes to software. Just like when doing cabinetry nothing is stopping you from processing wood yourself, likewise nothing is stopping you from rolling your own frameworks. But then you'll find you're closer to selling a Morton Chair [1] than regular furniture. (And FWIW that might be fine for your problem domain, especially if you're in the market of beautiful, high-value, handmade chairs with long lead times.)
Where this fails for me is that the 2x4 is a standard. And it's simple. And it's universally acceptable. I can send my wife and my kids alike to buy one from who knows where with little to zero instruction. They can work with it with ease. The "ecosystem" fades into the background. But modern software ecosystems are the exact opposite. You spend more time trying to learn/master/navigate the ecosystem than you do working with the metaphorical 2x4. To make matters worse, getting a 2x4 has been temporally stable for a long time. Not much has changed since my grandfather could send me to the store to buy a 2x4 on my own. But software ecosystems evolve and migrate weekly. Your argument demonstrates that all developed fields create ecosystems. But it does not address the issue that not all ecosystems are equal. Some are good. Some are bad. It's my opinion that modern software ecosystems look more like the British Civil Service than the ecosystem that produces 2x4s.
Let me refer back to Brooks’s famous paper: there are no silver bullets. There has been no order-of-magnitude increase in productivity since the appearance of the first managed languages, which is multiple decades now. According to him, the only way we can somehow “cheat” our way into more productivity is through ecosystems, that is, standing on the shoulders of giants.
Like, the only reason our computers are remotely working fine is that great pile of abstractions.
If you're doing things on servers that you manage yourself and not using lots of saas, you can probably still do things in an obscure programming language.
Last year they discussed why they didn’t choose rust (predicted 12-18 months to build a runtime that met their requirements) or erlang (performance too slow):
That's me. I discussed why we (I am no longer at Wallaroo) made the decision about 5 years ago to use Pony to build a product that is no longer what Wallaroo is selling.
I guess it makes sense from business point of view, although it is a pity that Pony is losing what is probably the only commercial user (that I am aware of).
I remember becoming curious about Pony because of how much they talked it up, though I never got around to trying it. Feels weird to see the pivot, though I've felt for a while Rust or Rust-inspired languages will become a larger and larger part of the future of software so I guess I shouldn't be surprised.
I evaluated Pony 4 years ago, and walked away due to 2 technical issues:
- garbage collection
- no mechanism for synchronous access to actors
We ended up building a C++ actor model, with its associated headaches. Yes, we still have race conditions and some developers invoke functions directly instead of using messages, and yes, sometimes we will grab a locking mechanism to do synchronous access, but at the end of the day, the performance of real code (an embedded gaming system) is all that matters, and we did meet our performance goals.
Having said that, I've given up looking for an actor programming language and started building my own. Essentially C++-like, with actors and compiler-validated tracking of resources across actors (so the compiler knows about locks and actors). Still working on the compiler (compiles to C++20) and maybe 18 months away from public reveal. But the initial batch of test apps look very nice (though without compiler validation of resource sharing at this point in time). Very terse code, and implementing a compiler that outputs C++ was much easier than I originally feared. And no, there are no forward declarations or header files; compilers are run on modern workstations that have the grunt to produce the necessary scaffolding (from the project dependency script).
That is presumably intentional, since if you could access actor state synchronously it wouldn't really be much of an actor. This sounds a lot like a "doing it wrong" kinda problem.
> (but without compiler validation of resource sharing at this point in time)
Not sure how you'll do this without garbage collection (like reference counting, which is what Pony uses iirc).
Honestly I suspect if you continue forward you'll probably walk away with much more appreciation for Pony's design decisions.
+1 actix.rs is pretty well-known and gives you the benefit of a full programming language if you need to do other things. Rust should be really easy to onboard to if your devs already know C++ and while the compiler is a lot slower, there's enough posts online about Actix that you shouldn't have issues searching around.
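For anyone weighing this route, here's a tiny sketch of the underlying pattern using only std channels - this is deliberately not actix's API, just the bare "state owned by one thread, touched only via messages" shape that actor runtimes build on (the message names are made up):

```rust
use std::sync::mpsc;
use std::thread;

// A toy message type; real actor runtimes (Pony, actix, etc.) add typed
// mailboxes, supervision, and scheduling on top of this basic shape.
enum Msg {
    Add(u64),
    Get(mpsc::Sender<u64>),
    Stop,
}

fn spawn_counter() -> mpsc::Sender<Msg> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        let mut total = 0u64;
        // The actor owns its state; the only way to touch it is via messages,
        // so there is nothing to lock and no direct synchronous access.
        for msg in rx {
            match msg {
                Msg::Add(n) => total += n,
                Msg::Get(reply) => { let _ = reply.send(total); }
                Msg::Stop => break,
            }
        }
    });
    tx
}

fn main() {
    let counter = spawn_counter();
    counter.send(Msg::Add(2)).unwrap();
    counter.send(Msg::Add(3)).unwrap();

    let (reply_tx, reply_rx) = mpsc::channel();
    counter.send(Msg::Get(reply_tx)).unwrap();
    println!("total = {}", reply_rx.recv().unwrap());

    counter.send(Msg::Stop).unwrap();
}
```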
With 21st century developer workstations having >= 8 cores, >= 2GHz, >= 16GB memory, it is ridiculous to have the scaffolding that Rust, Zig and other modern languages have. Source files don't live on their own and shouldn't be compiled as isolated units; they live in projects and need to be compiled as a batch. I'm stunned to see that so many developers are used to needing scaffolding - they just accept it as being necessary, but when you step back and think about it, it no longer makes any sense to require it. C and C++ get a pass here since they were designed in the 70's/80's, but languages that have been designed from 2010 onwards must be ridiculed for targeting a PDP-11 era development machine where each compilation unit has limited access to resources. The end language I'm designing has smaller source files, but the compiler is a lot more complex since it auto-generates the scaffolding (header files, forward declarations, etc.).
It's still a fair bit away, I just got the proof of concept compiler which outputs C++ working (and the compiler is written in the language itself, so I'm dog fooding the project).
There are people commenting that Pony was the wrong choice from the get go, but many of us have only heard of this startup (which operates in a saturated market) because they were using Pony.
As the head of engineering at the time that we made the decision, that had nothing to do with it, but I suppose it can't hurt.
That said, I don't think we ever got any business as a result of folks having heard of Wallaroo from any of the promotion we tried to do around technical decisions.
We did however attract interest from investors who would notice when our technical content did well on HN. It certainly didn't hurt during fundraising.
Our decision to go with Pony for the original product we were building (high performance, finance focused, stream processing) was in a large part driven by time to market. We didn't have runway to build our own high-performance runtime for the product and using Pony got us that runtime.
You can hear me talking more about the decision process here:
There was more to it than that, but the "we get a runtime that can meet our needs" was a huge, huge part of why we picked Pony. I have often told people that we didn't so much pick a language as we picked the runtime and the language came along with it. That's not really true, but it's not really wrong either.
> However, in the meantime data science and machine learning continued to explode, and we found that customer interest in deploying machine learning models far outstripped demand for more conventional stream processing data algorithms.
My experience is quite similar : I either end up working with clients who are already using Spark etc, or they don’t have a mature data engineering lab. In both cases, deploying a model to production is always the most challenging.
I hope their MLOps strategy is better, than just binding to MLFlow. This tool is absolutely not ready for prime time, and plagued by bad product decisions.
Just a thought. Rewriting the core platform from one language to another takes huge resources for orgs that are on a high-growth path. It feels like a side mission, derailed from the core value, which is solving the problem statement. Rewrites are generally a red flag for me, especially early in an org's history.
The product Wallaroo is building sounds really interesting, (It's so appealing I honestly checked out their open positions), but it's really difficult to get a feel for what exactly it's like to use their product now.
The description and outcome statistics make it sound _excellent_ and their technology stack makes their claims seem viable. However, and I say this as someone that is currently running an MLOps RFP, after looking at every page of their site and all their open job postings I have no idea what it's actually like to use their product.
Thanks for checking us out. We're working on improving our website and our explainer materials. Would you be interested in a quick conversation? I'd be happy to do a demo and get some feedback on ways we could improve the first impression. You can reach me at `andy at wallaroo dot ai`.
It is absolutely batshit crazy to start a business with new but immature tools like Pony. Unless investors are throwing money at you and you don’t care about building a long term successful business. All developer tools take years to become solid and well supported. And most of them fail to get any major developer mindshare. If you think your unicorn snowflake project needs a new immature but oh so shiny programming language to be successful then you are simply wrong. Not even “likely to be wrong” but 100% guaranteed to be wrong.
One of the hardest things at any company is recruiting and retaining top talent.
I’m sure some devs would love to learn a new language at a new job, but many would see it as a career mistake. Your career income increases the more you master a certain discipline like a certain language. If you’re a jack of all trades (and adding Pony developer to your resume), you’re probably limiting your future income potential.
> Our new Rust-based platform recently handled millions of inferences a second of a sophisticated ad-targeting model with a median latency of 5 milliseconds, and a yearly infrastructure cost of $130K.
Were these run on CPUs or GPUs? How many of them?
Last I looked at running Tensorflow models on CPUs it was really slow, so slow we had to abandon it.
Pony is younger than Rust and has far smaller exposure. It isn't strange that it didn't work out. Using the shiny new thing is rarely a good choice.
Technical issues were never the problem with the first product we built at Wallaroo. Pony met all of our technical needs for that product. We never got product market fit with it.
Rightly, around the time I left the company, they pivoted and started a new product that they thought would have better product market fit and they started over with a new codebase as it is a rather different product than what we first built.
Arguably the non-technical issues are equally important and, according to the essay, those were the reasons they ended up choosing Rust going forwards.
>Pony [...] has a smaller community, and as a small startup we were better off not having to solve problems outside our core domain
>wealth of available libraries
>access to a large community that will guarantee ongoing future support
>a wide variety of tools
>much larger pool of engineers who are eager to learn and work in Rust, or who already have significant Rust experience
>more resources available for learning Rust, and more opportunities for participation in conferences, etc.
Didn't say that Pony was bad for some technical reason. It was/is a bad choice for a startup to use due to being a relatively new, niche language.
I don't read this as "it was a mistake to use Pony." They state that:
> in the meantime data science and machine learning continued to explode ... With the increasing focus on MLOps, Rust became a better fit
This sounds like, "Our business goals shifted, as did the industry around data science, and Rust's maturity improved such that it made sense for us to migrate our stack."
Mistake or not is hard to quantify without presenting what qualifies as success first. Is "it was able to achieve our initial goals" success or is "it was able to grow with our business" success? For each of those how much ability is enough to call it "not a mistake"?
I think it's fair to say looking back it was a mistake to pick Pony on the assumptions it provided marginal benefit for the exact needs of the business day 1 but at the same time I think it's fair to say that picking Pony was not a mistake as it allowed them to get where they are today.
Wallaroo started as a company that was building high-performance data processing infrastructure for real-time trading and adjacent systems. Wordpress wouldn't really cut it.
For some startups, that makes sense, yes. For other startups, it may not. See pg's classic essay (which I'm not 100% sure I agree with, but, we're on Hacker News) for one example of why you may want to use a more niche language: http://www.paulgraham.com/pypar.html
And Rails was extracted from Basecamp. I think startups depend so much on the few firsts individuals that it's hard to have a hard rule about what to use.
I think that might be more obvious today than it was five years ago? In 2016 rust was already ahead of pony in adoption, sure, but they were both young interesting languages. The five years since have widened the gap considerably in maturity (of the core language as well as community/ecosystem) I think.
Rust had Mozilla behind it, a well-known relatively large tech organization/corporation. Most, if not all, popular languages nowadays tend to have/had some backing behind them.
OTOH an interesting language can be a recruiting draw. It probably helped them recruit engineers who were interested in the distributed systems and concurrency problems they were trying to solve. See for example Jane Street with OCaml.
[edit: oops, thanks for the heads up on the spelling :)]
OCaml is old and reliable, and I think it was already proven to work in the industry when Jane Street chose it. Even if it wasn't the best programming language ever, it was still a strong choice at the time, and still is. Pony on the other hand was and still is very young. I'm not saying it's a bad choice, but it's definitely not the same risk profile as OCaml.
It's just OCaml by the way, for Objective Caml. Unless you're talking about the secret Irish fork /s.
> I think it was already proven to work in the industry when Jane Street chose it.
Not really. Outside of Jane Street OCaml has scarcely been proven to work in the industry now. As a big OCaml fan and former OCaml professional, I say this lovingly: it was (and remains) popular in academia and that's mostly it. And Pony is roughly as old now as OCaml was when Jane Street started using it.
The actual reason OCaml's risk profile was much lower was because it effectively has the backing of the French government and academy, which is quite the boon.
IIRC Jane Street chose OCaml basically because Yaron Minsky was brought on as CTO, he had worked with it in school and was a fan of it, and they knew that for the sort of work they were doing OCaml would give them an edge (speed of development and runtime efficiency) and they calculated that its relative obscurity and poor community support wouldn't be a liability for the sort of work they were doing. And remember that it was the year 2000 - Perl was basically the only language with the sort of library ecosystem (CPAN) that is expected of languages now: poor community support was much less of a liability then.
> Outside of Jane Street OCaml has scarcely been proven to work in the industry now.
I think it depends on what you're working on. If you're building anything that looks like an interpreter/compiler, it's probably one of your best bets. If you're working on stuff that needs a lot of libraries, and relatively obscure ones, it's probably one of your worst bets. If you need good interaction with Windows, it's probably not a great choice either. The businesses I know, which are mostly SaaS, would probably fall under "not the best choice, use with caution". If that's the general case, I agree with you.
> The actual reason OCaml's risk profile was much lower was because it effectively has the backing of the French government and academy, which is quite the boon.
I wonder how much Jane Street benefited from that. The classes préparatoires are still using OCaml to this day (or at least were 3 years ago), and that's usually the best students of France. I've also heard that Facebook recruited quite a lot, for Hack and Flow.
> And remember that it was the year 2000 - Perl was basically the only language with the sort of library ecosystem (CPAN) that is expected of languages now: poor community support was much less of a liability then.
That's a good point. I think OCaml still has a better package manager and build tool than some really popular languages (I'm thinking specifically about Python), but it's hard to beat the ecosystem.
I'm not sure, AoE2 taught me that camels always win against horses. And if you consider Rust to be OCaml's child (which is kind of true if you really stretch things), it seems like young camels win against young horses too.
There is a downside to using an interesting language as a recruiting draw; you are going to get people who are more interested in the particulars of how you solve a problem than in actually solving the problem.
Sure, but if you take the time to actually read the article you'll learn that Pony was at the time the only language and runtime capable of meeting their needs.
You'd also learn that this is a different product with different requirements.
There's no such thing as the "best language" - there's the "best language that fits your problem domain".
C++ and Java did exist, and were rejected for the following reasons:
"Furthermore, the existing Apache tools depended on Java - specifically the JVM - where it's really hard to get predictable, very low latency results."
"From a purely performance perspective, C or C++ would have been a good choice. However, from past experience we knew that building highly distributed data processing applications in C/C++ was no easy task. We ruled out C++ because we wanted better safety guarantees around memory and concurrency."