In the 90s our sector started to look (again) at VMs as viable alternatives for software platforms. The most prominent example of that is Java. But Inferno was born at the same time on the other side of the Atlantic Ocean[0], sharing the same goals: a universal platform intended for set-top boxes, appliances and the like, with the same write once, run everywhere philosophy.
Inferno programs ran on top of a VM called Dis and were written in Limbo (designed by Rob Pike). If you use Go nowadays, you're using a descendant of Limbo, as both share a lot of syntax and concepts (and Limbo itself descended from Alef, but that's another story).
Going back to the 90s: this space was in a very "exploratory" state then, and any technology had to prove its superiority to gain adoption. For example, Inferno's creators implemented Java on Dis[1] simply to demonstrate how fast Dis was compared to the JVM.
I don't know exactly why things went the way they did, but I suppose Sun had deeper pockets to push its technology, and Inferno ended up as the rarity it is today.
From a pure tech standpoint, I've studied the Dis VM to write my own (incomplete) interpreter[2] and, IMHO, its design is far less abstract than the JVM's. It seems too low level and too tied to the processor designs of that era. That makes it less future-proof (but that's only my own point of view, of course).
Inferno was a Plan 9 derivative, a kind of research OS, not necessarily what people wanted.
Squeak/Smalltalk had the issue of programs and runtime images being conflated, making version control awkward (at the time at least).
Erlang is a functional language.
All the above had relatively small and poorly documented libraries.
Java combined an imperative, C-like, relatively simple language with an evolvable bytecode-oriented VM, a garbage collector, large and well-documented libraries plus lots of tutorials, and really intense VM engineering. It's easy to forget, but Cliff Click and colleagues were the first to show that a JIT compiler could produce code competitive with gcc; Sun also bought/implemented things like deoptimization, which is still relatively advanced, and then gave that tech away for free. From the end user's POV all of these things mattered a lot.
You express the Worse is Better philosophy beautifully: "worse" is preferable to "better" in terms of practicality and usability.
Innovation loses out to taking small steps that improve the worse alternative. A community builds up around tinkering, fixing errors, and working around design failures.
To some extent yeah. I definitely became way less radical and more incremental in my own design approach over time. Still, "better" does contain usability, performance, docs and other aspects of practicality. Like, those aspects do genuinely make things better. Innovation doesn't always improve things, it can be a dead end of bad ideas too.
Java was a pretty major clean break when it came out. It was probably right at the limit of what could be done innovation-wise whilst still being targeted at the mainstream.
With the possible exception of Erlang/BEAM, none of the candidates for "better" that the OP cited ever (ever) actually got tested in the field, at scale, to see if they actually deliver the goods. So it is merely an elitist opinion of a niche subset that apparently thinks its take on these matters is beyond question. Gabriel wrote his essay about two actually deployed alternatives, with the "worse" option (UNIX) coming out on top. LISP was in fact the incumbent and UNIX the scrappy upstart. None of the options the OP cited were incumbents. They are simply exceptionally safe bets to get on a high horse about.
Also agreed: Java and the JVM were not the "worse is better" options at the time of adoption. Hype alone did not create the massive shift to Java that occurred in the late 90s. It was a genuinely positive development for the practice and people were getting results. It was Java's second act of moving into "enterprise" (and Sun's mismanagement of that effort) that created the language's current sense of "heaviness".
The people who designed Java are on record expressing elitist reasons for Java's success and design goals. It boils down to: Java classes allow code monkeys to write spaghetti code that is forced to stay inside classes designed by others.
The reason I always hated Java is that it was designed by good programmers for bad programmers, in a looking-down-on-them way, not by good programmers for themselves to use.
BTW, the Java design team had licensed the Oberon compiler sources years before to study them.
People who profess Java's heaviness only reveal a lack of historical background.
Not only were CORBA and DCOM much worse; Java EE was the reboot of Sun's Objective-C framework for distributed computing. And those lengthy Objective-C methods are quite the pleasure to type without code completion.
While they certainly could have done better, it was already a huge improvement.
For most IT programmers (who were now the Java coders) CORBA was esoterica. So the JEE specs were never appreciated for their clarity, and the necessary abstractions were deemed "ceremony". Sun did a great job on the specs. They dropped the ball on (a) the pedagogical front (your precise point, actually) and (b) crippling the JEE specs to induce the likes of IBM to invest in J2EE app server development.
> IMHO, Technologically Java was the worst and least innovative. Platform adaption is "Worse is Better" so Java won.
In the words of Guy Steele: "And you're right: we were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp. Aren't you happy?"
Java was perceived (quite rightfully) as a C++ with the rough edges filed off. It gained a ton of mindshare by making it trivial for C++ developers to jump on the bandwagon. This was critical because in terms of desktop development (that's how all apps were built in the 90s) and even on the server side C++ was the undisputed king of app development.
Hum... You are aware that those are, respectively, one where the attacker gains execution capabilities inside the sandbox, and one hardware vulnerability that affects every single language, right?
Gaining execution capabilities inside the sandbox is already good enough to compromise its behaviour, e.g. everyone gets true back when is_admin() gets called.
Yes, but it's a complete mischaracterization to claim it's a failure of the sandbox.
In this specific case, it is quite a big deal to add write and execute controls to the WASM memory, so it requires more justification than "I can do stack underflow attacks on my C code". Even though "I can do stack underflow attacks on my C code" is relevant information.
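To make the "compromise inside the sandbox" point concrete, here is a minimal hypothetical sketch in C (the names is_admin and copy_name are invented, and whether the overflow actually lands on the flag depends entirely on how the toolchain lays out the data section): a guest module's own globals sit in unprotected linear memory, so an out-of-bounds write in guest code can flip a flag the guest later trusts, without ever escaping the sandbox.

    /* Hypothetical sketch only: names and layout are invented to illustrate
     * why corruption *inside* the sandbox still matters. Whether the overflow
     * actually reaches is_admin depends on how the toolchain lays out the
     * data section; no real module or exploit is described here. */
    #include <stdio.h>
    #include <string.h>

    static char name[8];       /* guest-side buffer in linear memory        */
    static int  is_admin = 0;  /* guest-side flag checked later by the code */

    static void copy_name(const char *src) {
        strcpy(name, src);     /* missing bounds check: can write past 'name' */
    }

    int main(void) {
        /* 8 bytes fill the buffer; the rest (plus the terminating NUL) spill
         * into whatever the linker placed next, possibly is_admin.          */
        copy_name("AAAAAAAA\x01");
        if (is_admin)
            puts("admin check now passes without any grant being made");
        else
            puts("layout happened to protect the flag this time");
        return 0;
    }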
In fact, everybody is trying very hard to reuse all the experience we got with the 90s VMs, and getting annoyed here and there when things are different enough that they can't.
God damn it, I missed the holy grail era of the computing industry. It's all so drab compared to the 90s and early 00s. There was so much more experimentation and willingness to try very new things.
I started my career in the early nineties. It's different now. Not sure if worse or better. Different.
What I enjoyed about the 90s is that there was still plenty of meaningful work implementing published algorithms. I remember, for example, coding Delaunay triangulation, our own Quaternions, etc. in C++. These days you just download and learn an API to do it.
What this means is now you can build comparable stuff much quicker. It's all been implemented and you're mostly gluing libraries.
Gluing stuff is a lot less fun than doing it from scratch and you learn much less but it's the only way to stay competitive.
If you want to build something rapidly to show off though, today it's much easier.
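For flavour, the sort of thing "from scratch" meant above (a minimal sketch in C rather than C++, not any particular library's API): the Hamilton product at the core of a hand-rolled quaternion type.

    /* Minimal hand-rolled quaternion multiply (Hamilton product). */
    #include <stdio.h>

    typedef struct { double w, x, y, z; } quat;

    static quat quat_mul(quat a, quat b) {
        quat r;
        r.w = a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z;
        r.x = a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y;
        r.y = a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x;
        r.z = a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w;
        return r;
    }

    int main(void) {
        quat i = {0, 1, 0, 0}, j = {0, 0, 1, 0};
        quat k = quat_mul(i, j);              /* i*j = k, i.e. (0, 0, 0, 1) */
        printf("%g %g %g %g\n", k.w, k.x, k.y, k.z);
        return 0;
    }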
Web development seems to suck more with every year passing. I wasn't all that into it in the 90's as it was very nascent and primarily for text and images due to bandwidth limitations. But VRML was mighty impressive if you used it on a LAN.
Web devs seem to reinvent wheels more than other types of devs. Everything is hailed as a breakthrough in productivity that subsequently fails to materialize. They impress each other with how concise the framework du-jour makes the code, forgetting that troubleshooting ease is much more important.
Troubleshooting stuff was much easier in the 90s, even with the simple tools. Not many systems were distributed. It was considered craziness to build distributed software unless you really needed it and had a massive budget.
Everyone builds distributed systems now, whether they need it or not.
AI/ML is amazing today. None of this was possible in the 90s; we had nothing approaching the crunching power required to produce decent results. So a lot of ML work was deemed a failure even though it's being vindicated now.
I think with the resurgence of ML the software engineering field is exciting again. Things were rather boring for the last two decades, when hipsters kept reinventing mundane stuff only to go back to the tried, tested, and true (vide the resurgence of SSR or RDBMSes).
Overall, I think the 90's were fun. More fun than what followed. ML/AI might make things fun again.
> Web devs seem to reinvent wheels more than other types of devs. Everything is hailed as a breakthrough in productivity that fails to materialize. They impress each other with how concise the framework du-jour makes the code, forgetting that troubleshooting ease is much more important.
I really wish more people could see this as you (and I) do.
Web development is really its own horribly sheltered community. They are driven to deliver at faster and faster paces, and they couldn't do anything properly even if they wanted to.
I was around then, and I was/am interested in things like this, yet I paid zero attention to them because I didn't know about them; they just weren't well known like they are now.
There weren't entire categories of Wikipedia articles about things like this. There weren't widespread communities welcoming new users, or lots of YouTube videos explaining everything and getting viewers excited.
Yes, that was the time to be into Inferno and projects like it, but the entire internet was a different place.
Maybe I just wasn't looking in the right places for this stuff or maybe I was too invested in other things, but I recall these things being terribly hard to penetrate even if you did find out about them.
From what I remember reading, JVM is a stack machine, while Dis is a register machine, right? Is there any other significant difference that would make Dis less "future proof"?
It's less work to port it to a future machine whose number and type of registers you don't know yet.
If a future CPU has more registers than your VM, you have to either cripple your VM by not using all the registers of the new hardware, or write code to detect and remove register spilling that was necessary on the smaller architecture.
If, on the other hand, it has fewer or a different mix, you have to change your VM to add register spilling.
Either way, your bytecode compiler has done register assignment work that it almost certainly has to discard once it starts running on the target architecture.
If you start at the extreme end of "no registers", you only ever have to handle the first case, and do that once and for all (sort of; things will be 'a bit' more complex in reality, certainly now that CPUs have vector registers, may have float8 or float16 hardware, etc.).
You can also start at the extreme end of an infinite number of registers. That's what LLVM does. I think one reason that's less popular with VMs is that it makes it harder to get a proof-of-concept VM running on a system.
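A minimal sketch of the trade-off being described, using invented toy opcodes rather than the real JVM or Dis encodings: the same statement, a + b * c, once as stack bytecode that names no registers at all, and once as three-address register bytecode in which the compiler has already committed to a register numbering that a port may have to spill or rewrite.

    /* Invented toy encodings, purely to illustrate the stack-vs-register
     * trade-off discussed above; not how the JVM or Dis actually encode code. */
    #include <stdio.h>

    /* Stack form: operands live on an implicit stack; no register numbers
     * appear in the bytecode, so nothing is committed about the target. */
    enum { S_PUSH, S_MUL, S_ADD, S_HALT };

    static int run_stack(const int *code, const int *vars) {
        int stack[16], sp = 0;
        for (;;) {
            switch (*code++) {
            case S_PUSH: stack[sp++] = vars[*code++]; break;
            case S_MUL:  sp--; stack[sp-1] *= stack[sp]; break;
            case S_ADD:  sp--; stack[sp-1] += stack[sp]; break;
            case S_HALT: return stack[sp-1];
            }
        }
    }

    /* Register form: the front end has already assigned r0..r2, i.e. it has
     * baked in an assumption about how many registers are worth using. */
    enum { R_LOAD, R_MUL, R_ADD, R_HALT };
    struct rinsn { int op, dst, a, b; };

    static int run_reg(const struct rinsn *code, const int *vars) {
        int r[4] = {0};
        for (;;) {
            switch (code->op) {
            case R_LOAD: r[code->dst] = vars[code->a]; break;
            case R_MUL:  r[code->dst] = r[code->a] * r[code->b]; break;
            case R_ADD:  r[code->dst] = r[code->a] + r[code->b]; break;
            case R_HALT: return r[code->a];
            }
            code++;
        }
    }

    int main(void) {
        int vars[3] = { 2, 3, 4 };              /* a=2, b=3, c=4 */

        int stack_code[] = { S_PUSH, 0, S_PUSH, 1, S_PUSH, 2, S_MUL, S_ADD, S_HALT };

        struct rinsn reg_code[] = {
            { R_LOAD, 0, 0, 0 },                /* r0 = a        */
            { R_LOAD, 1, 1, 0 },                /* r1 = b        */
            { R_LOAD, 2, 2, 0 },                /* r2 = c        */
            { R_MUL,  1, 1, 2 },                /* r1 = b * c    */
            { R_ADD,  0, 0, 1 },                /* r0 = a + b*c  */
            { R_HALT, 0, 0, 0 },                /* result in r0  */
        };

        printf("stack: %d, register: %d\n",
               run_stack(stack_code, vars), run_reg(reg_code, vars));
        return 0;
    }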
It isn't, unless you consider SPARC the future. SPARC had an interesting register architecture, where making a function call would renumber registers 9-16 to 1-8, and give the function a fresh set of 9-16; IIRC most SPARC CPUs had 128 or so registers total, with this "sliding window" that went up and down as you called and returned, which essentially gave you something like a stack.
The rest of the world's CPUs have normal registers, which is one aspect of what makes register-based VM bytecode easier to JIT to the target architecture, which was one of Dis's original design goals (with an interpreter as a fallback). It also happens that we know a lot more about optimising register-based instructions (rather than stack-based), so even if you had to fall back on an interpreter, the bytecode could have gone through an actual proper optimisation pass.
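As a rough model of that "sliding window" (deliberately simplified: the numbers are illustrative, not SPARC's real global/in/local/out split, and window overflow/underflow traps are left out), a call slides a base pointer forward over one large physical register file, the caller's outgoing registers become the callee's incoming ones, and a return slides the base back.

    /* A deliberately simplified model of register windows: one big physical
     * register file, a window base that slides on call/return, and an overlap
     * so the caller's "out" registers become the callee's "in" registers. */
    #include <stdio.h>

    #define PHYS_REGS   128
    #define WINDOW_SIZE 16   /* registers visible to one procedure           */
    #define OVERLAP      8   /* shared between caller's outs / callee's ins  */

    static int phys[PHYS_REGS];
    static int base = 0;                 /* start of the current window */

    static int *reg(int n) {             /* window-relative register n  */
        return &phys[base + n];
    }

    static void call(void) { base += WINDOW_SIZE - OVERLAP; }
    static void ret(void)  { base -= WINDOW_SIZE - OVERLAP; }

    int main(void) {
        *reg(8) = 42;        /* caller writes an "out" register...             */
        call();
        /* ...and the callee sees it as an "in" register, with no copying.     */
        printf("callee reads %d from reg 0\n", *reg(0));
        ret();
        printf("caller still has %d in reg 8\n", *reg(8));
        return 0;
    }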
Not an expert but I'd argue a stack is somewhat more future proof in the sense that it's less tied to a particular number of physical registers, whereas a register machine must be. Which is exactly what makes it a bit harder to optimise for, but that's what abstractions tend to do :-)
The sparc sliding register window turned out to be a very bad idea, but I guess you already know that.
I don't think the concept of register windows is necessarily a bad idea. IMHO, SPARC was flawed in that every activation frame also needed a save area for a register window, just in case the processor ran out of internal registers.
I think the Itanium did register windows right: allocate only as many registers as the function needs, and overflow into a separate "safe stack". Also, the return address register was among them, never on the regular stack, so a buffer overrun couldn't overwrite it.
There is a third option besides stack and registers: the upcoming Mill CPU has a "Belt": like a stack, but one you only push onto. An instruction or function call takes belt indices as parameters. Except for the result pushed onto the end, the belt is restored after a function call – like a register window sliding back. It also uses a separate safe stack for storing overflows and return addresses.
Long ago, I invented a very similar scheme for a virtual machine ... except for the important detail of the separate stack, so it got too complicated for me and I abandoned it.
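As a very rough conceptual model of that belt idea (not the Mill's actual instruction set or call semantics, just the push-only, index-by-recency addressing described above): results are pushed onto a fixed-length queue and operands are named by how many results ago they were produced.

    /* A toy model of the "belt" idea only: results go onto a fixed-length,
     * push-only queue and operands are addressed by recency (0 = newest).
     * Not the Mill's real semantics; function-call restore is not modelled. */
    #include <stdio.h>

    #define BELT_LEN 8

    static int belt[BELT_LEN];
    static int head = 0;                  /* index of the newest value */

    static void push(int v) {             /* oldest value silently falls off */
        head = (head + 1) % BELT_LEN;
        belt[head] = v;
    }

    static int peek(int ago) {            /* 0 = newest, 1 = the one before, ... */
        return belt[(head - ago + BELT_LEN) % BELT_LEN];
    }

    int main(void) {
        push(2);                          /* belt: [2]            */
        push(3);                          /* belt: [3, 2]         */
        push(peek(0) * peek(1));          /* 3*2 = 6              */
        push(peek(0) + 10);               /* 6+10 = 16            */
        printf("newest belt value: %d\n", peek(0));
        return 0;
    }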
It wasn't really like that, AIUI. Using a stack introduced hardware complexity while also serialising instruction processing (because you only work from the top of the stack, unlike a register set where you can access any part of it at any time), which caused the chip not to be the raging speed demon the designers thought it was going to be.
I'd very much like to understand what was going through the SPARC designers' minds when they did that. Looking back on it with my own current understanding of CPU design and all that, they seem to have made some incredibly basic mistakes, including designing the hardware without talking to the compiler writers (a cockup the Alpha designers very definitely didn't make). It's all very odd.
Another mistake they made was apparently deciding which instructions to leave out by counting them in the code – if an instruction didn't appear very often, they omitted it. Sounds reasonable, but that meant they initially left out the multiply instruction, which might not have appeared often in the code but was actually executed quite often (e.g. in array lookups). There were complaints that the new SPARCstations, with their superior new chip, were slower than the 68000-based machines that preceded them. Hardware multiply was added later.
Among all the other reasons stated, like independence from the platform's registers, stack-based VMs are really easy to implement -- you don't need to worry about register allocation in your VM's code generator; you can leave that to the stage of the VM that generates native code, which would need register allocation even on a register-based VM.
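A minimal sketch of what that buys the code generator (invented opcodes again, not any real VM's): emitting stack bytecode from an expression tree is a single post-order walk, with no register allocator anywhere in sight.

    /* Toy front end for a stack VM (invented opcodes): code generation is one
     * post-order walk over the expression tree. No register allocation, no
     * spilling, no target-specific decisions at this stage. */
    #include <stdio.h>

    enum { OP_PUSH, OP_ADD, OP_MUL };

    struct expr {
        int is_leaf;
        int value;                 /* used when is_leaf           */
        int op;                    /* OP_ADD or OP_MUL otherwise  */
        struct expr *lhs, *rhs;
    };

    static int code[64], ncode = 0;

    static void emit(int op, int arg) {
        code[ncode++] = op;
        if (op == OP_PUSH)
            code[ncode++] = arg;
    }

    static void gen(const struct expr *e) {   /* children first, then the op */
        if (e->is_leaf) {
            emit(OP_PUSH, e->value);
            return;
        }
        gen(e->lhs);
        gen(e->rhs);
        emit(e->op, 0);
    }

    int main(void) {
        /* (2 + 3) * 4 */
        struct expr two   = { 1, 2, 0, NULL, NULL };
        struct expr three = { 1, 3, 0, NULL, NULL };
        struct expr four  = { 1, 4, 0, NULL, NULL };
        struct expr sum   = { 0, 0, OP_ADD, &two, &three };
        struct expr prod  = { 0, 0, OP_MUL, &sum, &four };

        gen(&prod);
        for (int i = 0; i < ncode; i++)
            printf("%d ", code[i]);
        printf("\n");   /* prints 0 2 0 3 1 0 4 2, i.e. PUSH 2, PUSH 3, ADD, PUSH 4, MUL */
        return 0;
    }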
[0] https://www.vitanuova.com/inferno/
[1] http://doc.cat-v.org/inferno/java_on_dis/
[2] https://github.com/luismedel/sixthcircle