Hacker News
Plan 9 public grid (9gridchan.org)
182 points by zeveb on Feb 2, 2018 | 43 comments



As far as I can tell, it's services provided over Plan9's '9P' protocol. The 'grid' word makes me think there's some level of decentralization involved, though it's somewhat unclear exactly how much.

I find it amusing and somewhat strange how many of the modern-day Plan 9 enthusiasts come from 4chan's /g/ board, this project included. Maybe it takes a certain level of eccentricity to value the elegance over most measures of practicality. Either way, I fully support it.


The decentralization enters in a couple of different ways. First, the set of services is spread across several machines - I didn't make every single service a separate vps, but there are four different nodes providing the various 9p fses in use. It would be possible to run each service on its own vps, but at the moment that would be a bit unnecessary because load isn't that heavy. The other decentralized aspect is that, if people want to, they can announce their own services to one of the registries being exported, so people could build their grid environment from more resources than just the ones exported from 9gridchan. That is still mostly theoretical right now; people are still exploring the possibilities of the set of 'default' services.


Yeah there's a couple of different groups that still use/admire/say they use Plan 9. There's the /g/ guys that seem to have a perennial interest in it for whatever reason, the cat-v.org/9front guys (of which I'm one) that still heavily use, maintain, and improve the system, and the guys that have historically used Plan 9 prior to the bell labs dist more or less dying.


Well, I wasn't really expecting this to show up here, but I guess people have been paying some attention to plan 9 things around here lately. Hi everybody! I can try to answer some questions to those who are interested. I've been working on this kind of thing for a long time and it is interesting to see this version of things getting some traction.


I’m excited about this because it’s about Plan 9 but I have no idea what it is or why I should be excited. Please tell me more.

I read the description but I’m not sure if it’s a time sharing system on plan 9 that the public can log into or another internet or... something else.


It's a collaborative environment built from 9p services. If you add the scripts to use it to your plan 9 machine/vm (or use the ANTS iso images), it will create a grid workspace with a chat, shared filesystems, the ability to send links and images and documents to other connected users, and access to a registry for adding your own services if you choose. It currently has a small user community but it seems people are finding it fun and useful.


Tangential-ish question:

It's often claimed that the Linux /proc virtual filesystem has roots in Plan 9's design, and it's certainly a very powerful abstraction that allows, at the cost of some expressiveness, near-universal interoperability with some of Linux's internal state.

However, many BSDs do not support it out of the box, and I've heard it claimed that they're deprecating/have no plans to keep /proc around, favoring sysctls which return structs instead.

Assuming that both are true (/proc came from plan9 and BSDs are moving away from it), why is this the case?

Why abandon a universally-interoperable, so-simple-a-child-can-interface-with-it interface to what is otherwise a very hard-to-work-with type of system state?


Exposing kernel bits via the filesystem makes less sense the more you think about it:

* You have to build a state machine inside the kernel to handle cases like applications reading a file one byte at a time.

* If the API is complex, you get to build parsers (aka, the easiest way to introduce buffer overflows in C) in kernel-mode.

* Programs now have to deal with malicious applications capable of managing mountpoints giving fake results via the filesystem. I could link /dev/random to /dev/zero; how many programs are going to check for that?

* You can't let the program go into a chroot jail if it needs to read the kernel's magic filesystem.

* You have to mingle filesystem access bits with kernel security checks for process capabilities and the like.

It's definitely not a simple interface for the kernel to implement, and, quite frankly, it's much more complex for a security-minded application to poke at the kernel through a filesystem than it is through a syscall.


None of these should actually be a problem in practice.

> * You have to build a state machine inside the kernel to handle cases like applications reading a file one byte at a time.

You need to have this anyway, because regular files exist. Also, the logic is very simple.

> * Programs now have to deal with malicious applications capable of managing mountpoints giving fake results via the filesystem. I could link /dev/random to /dev/zero; how many programs are going to check for that?

Only root could do that. If an attacker has root, there are many more realistic attacks.

> * You can't let the program go into a chroot jail if it needs to read the kernel's magic filesystem.

You can add the magic filesystem to a chroot jail.

> * You have to mingle filesystem access bits with kernel security checks for process capabilities and the like.

I'm really not sure what this means.


> > * You have to build a state machine inside the kernel to handle cases like applications reading a file one byte at a time.

> You need to have this anyway, because regular files exist. Also, the logic is very simple.

"State machine" is a bit of a weird term here; it's more to do with "transactional state". This doesn't exist for regular files. If you read a regular file one byte at a time and something else is changing it, there's no transactional guarantee. You might get corrupted data. For that matter, you might get corrupted/half-changed data if you're reading >1 byte at a time. This is fundamental to the Unix file model (as distinct from, say, Windows), and is also the reason that file locking facilities exist.

I think GP was emphasizing the difficulty of having to bind the code that generates the contents of files in /proc with the status of any code reading those files. The reader needs a guarantee that it'll get a "snapshot" of the /proc pseudo-file as of some time (the start of the read? The end of the read? This is arguable.) Without that, there's no race-free way to ensure that all readers don't get corrupted data that's changed during the read.

This is required even if you can assume (which you can't) that all readers will use a buffer/chunk size larger than the contents of the file, and that the kernel's updates to the data backing the file are atomic.

A state machine is a common way to implement such snapshot/tx guarantees, but the fundamental desired property is a transaction, which, while I disagree with GP on some other points, is absolutely a needed and likely hard-to-get-right feature in this area.
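The snapshot-at-open semantics described above can be sketched as a toy model (this is illustrative Python, not actual kernel code; the `KernelState`/`ProcFile` names are invented for the example). The point is that the "proc file" renders its contents once when opened, so a reader pulling one byte at a time never sees a torn mix of old and new state even if the backing data changes mid-read:

```python
import io

class KernelState:
    """Toy stand-in for mutable kernel state backing a /proc-style file."""
    def __init__(self):
        self.pid = 1234
        self.name = "init"

    def render(self):
        return f"pid: {self.pid}\nname: {self.name}\n"

class ProcFile:
    """Serves byte-at-a-time reads from a snapshot taken at open time."""
    def __init__(self, state):
        # Snapshot once, so concurrent state changes can't tear a read.
        self._buf = io.BytesIO(state.render().encode())

    def read(self, n=-1):
        return self._buf.read(n)

state = KernelState()
f = ProcFile(state)
first = f.read(1)        # reader starts, one byte at a time
state.name = "systemd"   # "kernel" state changes mid-read
rest = f.read()          # reader still sees a consistent snapshot
text = (first + rest).decode()
assert text == "pid: 1234\nname: init\n"
```

Keeping that snapshot alive for the lifetime of the open file descriptor is the bookkeeping burden the grandparent comment is pointing at.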


Another entry for your list: vulnerability to file descriptor exhaustion attacks. This (and the chroot issue) is why OpenBSD promotes their arc4random() interface for generating random numbers rather than /dev/random.


That's a useful list, thanks. It's also saddening: almost all of those items are in the "it's hard to implement" category. I get that free software development is hard and self-directed, but it still bums me out that people are willing to toss the baby out with the bathwater.

A couple of those points seem plain wrong/misleading, though:

> Programs now have to deal with malicious applications capable of managing mountpoints giving fake results via the filesystem.

If malicious code has privileges to change the pseudo-device paths for /dev/zero and /dev/random on your system you can easily be compromised regardless. Such code could sniff your network traffic, put junk data into any file (or socket descriptor, since it could probably also install a tap) your program used, and likely also debug/inspect/halt/alter the runtime behavior of the victim process(es).

> You can't let the program go into a chroot jail if it needs to read the kernel's magic filesystem.

This is technically true, but I think it would be possible to develop a convention or standard for chrooted programs that resulted in certain "capabilities" (e.g. "can it use procfs? which parts? how about /dev/zero?") being present in standard locations inside the jail. It is more implementation/standardization work, though, so this isn't a criticism--more of a regret.


You can easily remount the magic filesystem inside a chroot. This is commonly done when installing distributions like Gentoo and LFS.


I think some of the reason the BSDs are moving away from it is that no one uses it much and no one really maintains it. It just never got the traction on BSD like it did on Linux.

procfs should just be the place to get information on and fiddle with processes, but on Linux, for example, you can fiddle with all manner of unrelated extraneous things. This caused the different procfs implementations to diverge enough to kill interoperability, leaving POSIX as the only fallback. So, if you wanted to write a cross-platform process fiddler you would need to use POSIX or write a number of different codepaths to interface with the number of different procfses. This lack of interoperability and replication of features was probably what killed it.

On expressiveness: procfs is more expressive than using structs. It does everything the structs do, but is more available to programs than the structs because it's all just files. On 9front you're able to mount another machine's proc filesystem and debug or otherwise inspect the programs running on that machine. You're also able to use any language you want, as long as it can read files, because you don't need any external functions beyond the syscalls.
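The "any language that can read files" point is easy to demonstrate: consuming a procfs-style interface is just line splitting. A small sketch (the sample text below mimics the style of Linux's /proc/meminfo, but the values are made up for illustration):

```python
import io

# Sample text in the style of Linux's /proc/meminfo; the numbers
# here are invented for the example.
sample = io.StringIO(
    "MemTotal:       16384000 kB\n"
    "MemFree:         4096000 kB\n"
    "SwapTotal:       2097152 kB\n"
)

# Because the interface is just lines of text, any language that can
# read files and split strings can consume it -- no C structs, no
# binding layer, no special library.
info = {}
for line in sample:
    key, value = line.split(":", 1)
    info[key] = int(value.split()[0])  # strip the "kB" unit

assert info["MemFree"] == 4096000
```

A struct-returning sysctl, by contrast, needs a language binding that knows the struct layout before you can read a single field.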


Because abandoning a universally-interoperable, so-simple-a-child-can-interface-with-it interface to what is otherwise a very hard-to-work-with type of system state is easier than fixing file descriptor exhaustion problems.


"This time, though, we actually have some fucking clue what we are doing."

There's confident, and then there's crazy;)

Best of luck to them:)


(sorta offtopic) Does anyone remember a cloud storage service based on 9p? At least I think it was based on 9p. Or written by plan9 (fossil, venti) enthusiasts.

Its name was Rangoon (I think). It even pre-dated Dropbox.


That was Rangboom from 9netics[1]. Archive link because the site now serves a very narrowly-focused photographic art installation.

Their later effort was called "9p cloud."[2]

1 - https://web.archive.org/web/20170311182215/http://www.9netic...

2 - https://www.9pcloud.net/


Wasn't that brucee's gig for a while? I remember they had written a windows 9p client which is hen's teeth now, but I remember it working well.


Yep, brucee has (or had) a 9netics email address.


Thanks! I looked at 9netics too, seems it has been changed back to normal 9netics -- no longer photos/art? And copyright 2018.


"We aren't expecting abuse to be a major problem."

Oh ... dear.


Coming from 4chan, you'd think they would know better.


It's amazing what an incredible filter "you have to use plan 9" is for things. I have been doing public plan 9 services on and off for a long time, and although I'm sure someone will be a jerk eventually, I have yet to see any deliberately malicious behavior. On the grid right now, things are pretty open and the user community is all cooperative and constructive. So far as I can tell, the venn diagram of "people looking for trolling and malicious lulz" and "people interested in using plan 9" has approximately zero overlap.


That's an awful lot like security-by-obscurity.

In fact, I'm fairly confident it's pretty much the same exact thing.

If you don't build a system with defences against abuse, you will see abuse. It's a matter of scale and time.


The internet worked fine that way too, until it was more widely adopted.

As long as Plan9 remains niche enough, that approach might work.


The grid is back up; a VPS host issue occurred and a fix is underway.


Seems to be down right now, so not sure what services are implemented, but... someone should port this stuff to upspin. :)


My vps host picked the most inconvenient time possible to reboot the root node for the grid. Things should all be working again now.


After a bunch of reading, I still have no idea what this is. Some Linux variant?


Imagine if Unix was invented after the invention of things like networking and GUIs. Basically, take the core concepts of Unix, like "everything is a file" and "the only protocol is bytes", and make an operating system which religiously sticks to those principles while supporting modern, difficult-to-abstract systems.

For example, on Linux you'd load a web page by instructing a program to open an (AF_INET, SOCK_STREAM) socket, tell the kernel to connect() to some destination, send some bytes, read some bytes, and render those bytes as a page on your screen using another set of userspace libraries and system calls.
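That socket-level flow can be sketched in Python. To keep the example self-contained, it stands up a tiny local "server" on a loopback port instead of contacting a real web host, but the client side is exactly the open-socket/connect/send/read sequence described above:

```python
import socket
import threading

# Stand-in "web server" on localhost so the example is self-contained;
# a real page load would connect to a remote host on port 80 or 443.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    conn, _ = srv.accept()
    conn.recv(1024)  # read the request bytes
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello")
    conn.close()

threading.Thread(target=serve).start()

# The client side: open an (AF_INET, SOCK_STREAM) socket, connect()
# to the destination, send some bytes, read some bytes.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"GET / HTTP/1.0\r\n\r\n")
reply = cli.recv(1024)
cli.close()
srv.close()
assert reply.startswith(b"HTTP/1.0 200 OK")
```

Every step here goes through a socket-specific API; the Plan 9 approach below replaces all of it with ordinary file operations.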

Whereas, on plan9, you might mount a remote server to a local namespace, look for the page you want, then pipe that file through an HTML renderer and the output to a display server. Or you might echo "connect 192.168.0.1!80" into /net/tcp/0/ctl, and read the result from /net/tcp/0/data.

Linux has borrowed several things from plan9, including /proc and /net, and you can even set up your system so you can pipe things to /dev/tcp. The single abstraction of files is a pretty good one for exposing all sorts of functionality to userspace, so you can get pretty far doing just about anything without needing to drop into C or otherwise ask the kernel directly.

Disclaimer: I'm not a plan9 user, and I'm not saying that files are the best userspace abstraction.


> and you can even set up your system so you can pipe things to /dev/tcp

The bash shell supports /dev/tcp by default for file descriptor redirections (so people who are using bash can already do this!).


As noted, that's a shell-specific artefact, which makes me shudder thinking of DOS's command-specific globbing.

And not all bashes support this -- Debian (and I suspect Ubuntu, Mint, etc., based on it) disables this option at build time.

You can get something fairly close via netcat, though IMO that also shows some of the weaknesses of the approach, which is that you're not interacting with a web page, say, so much as you're talking http to a socket. Not that there aren't times when that's an entirely sensible thing to do, but if you just want to read stupid stuff on the Wobbly Wobbly Wibble, it's a bit too thinky.

(All of which I'm sure schoen knows far better than I, though others might not.)

The idea of talking protocol-based stuff through something that looks vaguely directory / file-ish, which could do sensible things with HTML (or audio, video, data, messaging stuff, email stuff, crypto stuff), and through which utilities would have access without needing to be specifically adapted to handle the cases, might be interesting.


But it doesn't generalize outside of bash -- and where do you think bash got the idea from?


I'm sure bash got the idea from Plan 9, although I didn't know that until recently.


Plan 9 is an amazing, inspirational, and futuristic operating system from the past: https://en.wikipedia.org/wiki/Plan_9_from_Bell_Labs

One of the things that Plan 9 can do is seamlessly treat remote resources as if they were local. From what I can tell, this is a remote grid that you can connect a local Plan 9 instance to.


Do these i/o interfaces tell you their expected latency and throughput numbers? I could see some fairly silly code being written because the author didn't realize that file was actually a bad cell phone connection.


The main author of Plan 9 was Rob Pike. He knows a little bit about networked software.

I actually had Plan 9 working on some Sun 3/50s at one point. You could use it over a 2400 baud modem... a kind of broken one where I had to listen to the connection noises and twiddle the wires at the right moment to make it connect. A bad cell phone connection would have been nice.


Honestly it's not so bad over a bad cell connection as long as latency isn't extreme. I've found that throughput isn't a huge issue, but latency is a killer.


KIO does some similarly magical things e.g. click a URL to a jpg file, and it opens in your picture viewer app instead of the browser.


Supposedly KIO is what OLE was supposed to be.

Never seen it taken beyond KDE though.

Then again, KDE had/has many things that nobody seems interested in replicating, like how Konqueror could be divided much like a tiled WM, with each tile holding anything from (khtml driven) web sites, to local, ssh, smb or ftp directories, or even terminals (iirc).

One could basically create just the right tool for a local or remote file management job one needed.

Sadly I have abandoned KDE since the debacle that was 4.0...


Same here. I know of Plan 9, but have no idea what this grid thing is.

I, however, just went on to the old project's page[0] and found this:

> Q: What is the /9/grid?

> A: A project begun by users of 4chan's technolo/g/y board to setup a free, decentralized grid computing network, available to users of any OS. By integrating Plan 9 based networking technologies (via Plan 9, Inferno, 9vx, or others) with our usual OSes, we can create transparent networks of whatever content and services we want. Note that there are several different projects using the term 9grid, the resources here are only a small portion of the world of Plan 9 + Inferno based distributed systems.

[0] http://www.9gridchan.org/archive/9grid_FAQ


Some Oberon variant, by people really invested in Unix.

