I am waiting to see what becomes of Arthur Whitney's Desktop and Server OS based upon his language K5/K6.
KDB+/Q (database and language) is a small download, but is very powerful.
I had heard of Inferno and Plan9, but aside from some YouTube videos and links, I never really looked it over or took it for a drive. I'll have to check this out; after all, it's amazing what nuggets are buried in the recent past of CS.
Here are some timing tests from 1998 comparing various languages (C, Java, Perl, Tcl, and others) with Inferno [3]. Brian W. Kernighan of K&R fame was one of the authors at Bell Labs!
And here's a comparison with Arthur Whitney's K language, done I believe not long afterward, using the same code against K on a 100MHz Pentium [4]. K is significantly faster and shorter in LOC.
The tests were mainly loops, text, and stdio. APL-derived languages like A+, K, J, or Q don't usually do explicit loops and are better suited to binary and memory-mapped files, but K did very well in the benchmarks despite those factors.
As attractive as kx's claimed numbers are, you'd need to be a highly motivated company to embrace it. The single-character-for-everything culture is impenetrable and unreadable, thus a very high bar for maintenance at the organization level.
Just providing readers with a different take on things: I've been coding in K professionally, after coming from a C-heavy (not functional-programming!) background, for about three months. It only took a few hours with the language to have the apparent line-noise make sense, and after a few weeks I could decipher the function of most snippets I see.
There are great reasons why a dev team might not want to switch over to K for its operations (or switch languages at all, really), but seeing K in action, I have to respectfully disagree that readability and maintainability are those reasons.
I've followed a lot of arguments over the accessibility of k and I'd say what it boils down to is that like Lisp or Forth, for a passionate few, it just fits how they think, and for the rest it doesn't. I tried it for a while, but found the paradigm too rigid for general purpose programming. Also, I got tired of the mental effort it took to read/write code where every line is as readable as a long regex.
> very high bar for maintenance at the organization level.
First, it's much easier to learn k/q to a good enough level than to learn C. Second, this apparent bar has a nice effect on the salary you can get for it.
Given his predilections, I guess the austerity of the language will be reflected in the user interface, too. So my bet is that this won't be a new Oberon, QNX or Inferno, but closer to ColorForth.
Could you elaborate a bit on what kind of commercial systems you used this for, and what the role of Inferno was in them?
I've been really interested in playing with it since my college years, but every time I got to poke around I felt like it wouldn't be (in my very uneducated opinion) fit for something in production.
However, the idea of the OS is very appealing to me, so I would love to hear more about how it fares in the real world and what a compelling use case would be.
My friend used it as the OS for an acoustic levitator built in the US and sold to a Japanese company - he got sensible hardware control and a GUI toolkit all in the same OS.
I worked for a startup trying to build a phone - I'm under NDA on that so can't say much about it, except that we incorporated SQLite into it easily enough - that's the beauty of an OS in C: tack on a 9P interface and you don't need drivers or headers in your Limbo code.
In fact that's the greatest takeaway from Plan9 and Inferno for me. Add 9P to your server and you never need native drivers again - it's a bit like REST. Once you can mount 9P you can compose all sorts of stuff.
The canonical example is TCP. If I can mount your /net/tcp on my file system I can use your network stack, even if I am only connected to you with a serial cable. And it's per-process, so I can have two shell windows importing network stacks from two different remote machines and use the two networks separately.
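To give a feel for how little machinery is involved, here's a rough Go sketch - my own illustration, not Plan 9 or Inferno code; the server address and message size are placeholder assumptions - that hand-encodes the Tversion message which opens every 9P2000 conversation and sends it to a file server over TCP (564 is the conventional 9fs port):

    package main

    import (
        "encoding/binary"
        "fmt"
        "io"
        "net"
    )

    func main() {
        // Hypothetical 9P file server; 564 is the conventional "9fs" port.
        conn, err := net.Dial("tcp", "example.org:564")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // 9P messages are little-endian: size[4] type[1] tag[2] body.
        // Tversion (type 100) body is msize[4] version[s], where s is a
        // 2-byte length-prefixed string; size counts the whole message.
        version := "9P2000"
        msg := binary.LittleEndian.AppendUint32(nil, uint32(4+1+2+4+2+len(version)))
        msg = append(msg, 100)                              // Tversion
        msg = binary.LittleEndian.AppendUint16(msg, 0xFFFF) // NOTAG
        msg = binary.LittleEndian.AppendUint32(msg, 8192)   // proposed msize
        msg = binary.LittleEndian.AppendUint16(msg, uint16(len(version)))
        msg = append(msg, version...)
        if _, err := conn.Write(msg); err != nil {
            panic(err)
        }

        // Read the reply header, then the rest of the Rversion message.
        hdr := make([]byte, 7)
        if _, err := io.ReadFull(conn, hdr); err != nil {
            panic(err)
        }
        body := make([]byte, binary.LittleEndian.Uint32(hdr[:4])-7)
        if _, err := io.ReadFull(conn, body); err != nil {
            panic(err)
        }
        fmt.Printf("got type %d (101 = Rversion), body % x\n", hdr[4], body)
    }

After the version exchange you attach and walk to files the same way the kernel does when it mounts /net from another machine - that's the whole trick.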
I'd be interested in talking about the phone project in more detail... I'll even sign an NDA if the startup still exists, or just generally talk about Inferno. We have probably communicated on the Inferno list in the past, actually, but since neither of us have contact info in our profiles you can reach me at this disposable address for the initial contact:
If you think the Bell Labs school of OS design is that interesting, have a look towards ETHZ, where a lot of that originated (as the ultimate minimalistic filtration of Xerox, of course).
the time for grand OS ideas has long passed. here's a eulogy by one of the authors of plan9 and inferno (you won't hear anything like it from the authors of other operating systems, except Tanenbaum):
No it hasn't. There's a ton of cool shit that hasn't been invented yet. What's stopping research is (as rob more or less states) that researchers want to make a usable system more than they want to make an interesting or innovative one. It's going to happen again someday. It has to; a gaping hole like this will get filled in.
Also, in the sixteen years since then some of the core Plan 9 ideas like network block devices, union filesystems, user-space/user-managed mount points and other use of namespaces have indeed made it into other OSes.
Perhaps not with the purity that the Plan 9 developers hoped for, but in the end we use computers to get work done.
Despite the pessimism, Rob's talk always has lots of refreshing insights and inspiration. But I also wonder how, after several years at Google, Rob would see systems software research today. (He is not doing OS work at Google, but since he calls golang a "systems language", he uses "systems" in a broader sense than OS.)
This odd broadening of the term "systems programming" appears to be endemic to Google culture and is likely something he picked up after working there.
It's worth paying attention to this if you are ever thinking about going to work at Google, because it is possible to have extensive conversations ahead of time about the sort of work one is and is not interested in doing, and still end up completely failing to communicate. "Library", "tools", and "file system" are other terms which have Google-specific meanings.
This wouldn't be so much of a problem if you could just apply for a job directly, instead of having to be hired by Google generally and then getting assigned wherever they think you'll fit. (Is that still the practice, or have they improved things?)
nickpsecurity may expand on this, but there are many, many OS innovations dating back to the 60s, 80s, and 90s which haven't made it into the mainstream. Of all of them, I wish we had gotten some combination of the ideas in Oberon and EROS, to have a high-level systems language plus a secure system. I believe now is a better time than ever, given the lowered barriers to building a device on top of a custom OS and spreading it wide and far. The manufacturing chain has become more accessible than it was in the past, and we have more viable candidates for systems languages to breathe sustainable, risk-aware, strong life into such a machine.
When I look at the architectures being pushed onto developers via the mobile OSes (iOS, Android and Windows Phone), as well as their development environments, I kind of see glimpses of those ideas.
On the FOSS front I don't expect much innovation in that area, as the alternatives seem to just be yet another POSIX clone, with the exception of unikernel research.
Quite a few features have moved into the mainstream, often not as consistently as before, and there's actually plenty more in the making over the past decade. The comments here would make you think there's no innovation in CompSci, but there is. JX is a nice, recent example: they do a capability/isolation-like architecture, implement it via language-based security in a VM, make it faster than many microkernels, put half the drivers in the VM, and run that and the other parts of the drivers on a microkernel for lower risk. They already have a web server to show usefulness. Clever stuff that might be good in appliances, maybe with MINIX 3-like self-healing.
For old ones, Burroughs stays at the top of my list, given they scientifically designed a HW/SW combo from a language report (ALGOL) to be the ideal machine for it. You usually worked closer to the algorithm level instead of low-level, with good reliability/security. KeyKOS had fine-grained isolation of everything plus persistence, where your data... your whole running system... would likely survive a crash. VMS's clustering and central support for clustered apps was so good that you basically just configure the boxes, tell the apps to use the OS functions, set policy, and you're good to go. 17 years uptime on the highest one. Convergent's CTOS and Tanenbaum's Amoeba both made a collection of workstations act like one mainframe or minicomputer OS with all resources shared for max utilization. We have Grid Computing today, but not that clean integration.
The most interesting stuff for developers and hackers was in LISP machines, esp Genera. Features from LISP that should go into every programming language: interactively running commands for testing; incremental, per-function compilation; live updates of that to the running app for instant results. Combined with safe typing, this would let you crank out code like lightning. The OS itself was implemented with this high-level, highly-debuggable, live-updating language. It came with source. So, you could trace a problem in your app from it down to a system call, seeing the actual source code of the OS along with the current, running state of how it was being [mis]used. You could then correct the system if you chose to keep it running. Live updates and modification of your own running system, with the same language, from apps all the way to the bottom of the OS is just unheard of today. Needless to say, they rarely crashed outside of hardware failure. :)
Far as mobile, most of the work is going into isolation and virtualization mechanisms, some with TrustZone, etc. OKL4 was best result there. I don't follow mobile as much & search engines are often clogged. Some Google wrangling got me these innovations.
Reflex addresses heterogeneous programming of fast and low-power CPU's, a manual, tedious process. They applied the old concept of software DSM (distributed shared memory) to make it easy. I've been a DSM fan since my supercomputer days, so that's awesome.
Alright, I've thrown a few others above that illustrate different ways OS research is going. Last two are modern grid OS's with one quite innovative in that it works on ad hoc networks whose assumptions are as bad as WAN's. The 2nd one has lots of references to interesting systems. Should give you a taste of clever stuff going on in mobile and grid that you might have missed.
Agreed, but what made it into mainstream computing we are aware of and touch and fiddle with? I don't consider a deeply embedded baseband processor to be mainstream outside the handful of radio engineers.
It's crazy how Burroughs (B-5000) solved many issues elegantly and Intel's team with i960 lost the internal challenge to the x86 group. It may have been before its time to some extent, but worse isn't better when it concerns the underpinnings of computing.
Barrelfish is very nice but purely research. OKL4 used the Dresden design, took efficient microkernels to another level, and managed to get deployed into millions of devices, and that's great, but the only mainstream microkernel'ish remnant I could point to that is active and modern is NT's userspace graphics and audio drivers. There are similar things on Linux but not quite the same. I find it telling when network gear switches from VxWorks or QNX to Linux with extra kernel modules. Market forces and maybe the illusion of more Linux developers might be the reason, but it's not too smart.
The success of UNIX in the 80s led to the unsatisfactory OS architectures we're using today. It also led to the rise of C and the introduction of trivial attack vectors all over the place. Even plain Pascal was much safer and used on Apple systems above ASM.
"Agreed, but what made it into mainstream computing we are aware of and touch and fiddle with?"
Many things made it into the mainstream. Use of high-level languages with GC and automated checks. The mainframe model and many of its techs reinvented in the cloud market and Xeon CPU's. Clustering is mainstream. Microkernel and hardware-up approaches dominate mobile research and products. Highly concurrent desktops like BeOS were anticipated. I'm still waking up, but this is what I remember off the top of my head.
"I don't consider a deeply embedded baseband processor to be mainstream outside the handful of radio engineers.""
Damn, my sleepy ass might have read it wrong. Still, it might have been right to cite it given that things like Samsung Exynos, with big cores and little cores, are getting more prevalent. lowRISC has a main core plus tiny cores. Even embedded parts are mixing cores of different strengths. What it gets you is a significant boost to concurrency with lower power and cost. On the other end, you can emulate Channel I/O from mainframes to get great throughput:
"It's crazy how Burroughs (B-5000) solved many issues elegantly and Intel's team with i960 lost the internal challenge to the x86 group. It may have been before its time to some extent, but worse isn't better when it concerns the underpinnings of computing."
My thoughts exactly. Burroughs is being reinvented under crash-safe.org with functional programming on top. Draper is integrating the HW enforcer with RISC-V, with open-source plans. Some hope. i960 was clever. The lesson learned is to ensure compatibility with whatever language or platform is popular, to piggyback on that popularity. CHERI is doing that. Otherwise, you might die off.
"but the only mainstream microkernel'ish remnant I could point to"
Microsoft is taking the lead integrating cutting-edge tech like driver mitigations. Unfortunately, mainstream desktops, etc. mostly ignore this. However, in embedded, there are lots of microkernels in use. One mainstream product, Blackberry OS & the Playbook, used one of the best ones, QNX Neutrino. Speed, responsiveness, and reliability are excellent. Too bad about the network switches, as Linux isn't going to beat QNX on reliability. The reason might be that they're usually deployed in an HA configuration that survives crashes anyway. As long as it's not a byzantine failure, that works well enough for corporate environments.
"The success of UNIX in the 80s led to the unsatisfactory OS architectures we're using today. It also led to the rise of C and the introduction of trivial attack vectors all over the place. Even plain Pascal was much safer and used on Apple systems above ASM."
The success of System/360 + COBOL, CP/M + DOS + x86, UNIX + C, and recently Mach + UNIX + Objective-C. It was a team effort. :) As far as Pascal goes, you might find the design and assurance sections of the kernel below interesting: it was written in Pascal for safety and was the first secure kernel.
Btw, if you want, I can try to dig out an interview with Dr Schell that I found that traces his invention of the INFOSEC field from the start. It's a long interview but was very worth it. Many surprises along the way, like corporate types wanting high assurance but the IT industry fighting it. And Burroughs' legacy going further than you think... even into Intel x86 CPU's. Not everyone wants to spend 30 min reading an interview, though, so I understand if not.
> Blackberry OS & the Playbook, used one of the best ones, QNX Neutrino
Unfortunately, the devices Blackberry created didn't sell enough, and now they're trying to be another Android manufacturer. It's sad, but a kernel alone doesn't make a user-facing device.
> interview with Dr Schell that I found that traces his invention of INFOSEC field from the start
please do
> Burroughs' legacy going further than you think... even into Intel x86 CPU's
How so? Do you mean the half-hearted attempts by Intel like their transactional memory extensions or MPX? The B-5000 was great for writing implementations of GC'ed languages.
I'm not going to spoil it. What I will say is that most thought Schell and Karger invented the security stuff, with Anderson of the Anderson Report being some suit or manager. We also thought the Burroughs stuff happened in parallel with, but separate from, what led to the Orange Book and Intel's security extensions. I always thought Burroughs B5000 tech was too good to be isolated in history. We also thought businesses, as is often said, saw no value in real INFOSEC and instead wanted checkboxes.
The interview will counter all of that. I especially liked learning how the first certified system and demonstrator, the SCOMP, was built. All this time I thought the government funded it out of a belief in INFOSEC. Wrong again, with another surprising answer.
I've been playing around with it on[1] and off, mostly on the Raspberry Pi[2]. It's very nice conceptually, almost impossible to use practically except for research purposes.
It's not too bad for practical use: Vita Nuova uses it commercially, and while the site is not much to look at, the guy behind it is still active. Inferno is also nice for making lightweight, networked systems and as such lends itself nicely to use with a Pi.
Well, code reuse is a pain, for starters. Limbo is not something people actively develop in, there are very few native ports of networking protocols, and you need to rebuild the OS if you want to expose hardware functionality.
I've written a good bit of Inferno, and one thing that's really neat is how closely related Limbo (the programming language) is to Go. There's a good reason for this, of course: the guys who wrote Inferno went off to Google and made Go. The Tk GUI capabilities are a bit annoying to work with, but it's really not hard to whip up a GUI application in Inferno quickly, all the while getting to interact with those 9P servers which make data exchange really simple.
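To make the family resemblance concrete, here's a throwaway Go snippet (mine, not anything from the Inferno sources; the names are made up for illustration) with the rough Limbo equivalents noted in comments:

    package main

    import "fmt"

    // worker sends a few values and closes the channel, much like a
    // spawned Limbo function writing to a "chan of int".
    func worker(c chan int) {
        for i := 0; i < 3; i++ {
            c <- i // Limbo: c <-= i;
        }
        close(c)
    }

    func main() {
        c := make(chan int) // Limbo: c := chan of int;
        go worker(c)        // Limbo: spawn worker(c);
        for v := range c {
            fmt.Println(v) // Limbo: sys->print("%d\n", v);
        }
    }

Go's packages and goroutines map fairly directly onto Limbo's modules and spawned threads, and both inherit the channel style from Newsqueak and Alef.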
Inferno runs straight off Linux (you start the emu program).
Plan9 also has 9vx or so.
Plus with plan9, all you need to start a workstation is 9pcf (the multiboot PC kernel) and a plan9.ini. You can literally just hand those two things to qemu and mount the rootfs from a file server with almost no setup.
Ah I forgot hellaphone! I miss having a 9P interface to my texts that I could just mount and use with some rc scripts on my terminals. Sucks that I have an iPhone now where this is probably impossible or insanely hard to do.
If you want to make a 9P interface to SMS without running Hellaphone, you should start by looking into the RIL daemon. If I remember correctly, it runs at the Linux level (not Java) and drops a socket somewhere. The Java libraries for making phone calls, texts, etc. work by sending commands to that socket. You can read/write that socket yourself and use it to, e.g., send SMS.
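As a starting point for that kind of poking, here's a very rough Go sketch. Everything specific in it is an assumption from memory: the socket path (/dev/socket/rild on older Android builds), the 4-byte length framing, and the need for root; the real request codes and parcel layout live in AOSP's ril.h and would have to be filled in before this did anything useful.

    package main

    import (
        "encoding/binary"
        "fmt"
        "io"
        "net"
    )

    func main() {
        // Assumed path for the rild control socket; needs root and will
        // conflict with the Java telephony stack if that is still running.
        conn, err := net.Dial("unix", "/dev/socket/rild")
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        // Just dump whatever parcels arrive. The 4-byte big-endian length
        // prefix is my recollection of the framing, not a verified spec.
        for {
            var hdr [4]byte
            if _, err := io.ReadFull(conn, hdr[:]); err != nil {
                panic(err)
            }
            body := make([]byte, binary.BigEndian.Uint32(hdr[:]))
            if _, err := io.ReadFull(conn, body); err != nil {
                panic(err)
            }
            fmt.Printf("parcel: % x\n", body)
        }
    }

Wrap something like that in a small 9P file server and you'd be back to the mount-and-script workflow on the terminal side.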
[1] http://kparc.com/
[2] https://kx.com/
[3] https://9p.io/cm/cs/who/bwk/interps/pap.html
[4] http://kparc.com/z/bell.k