So first off: Very cool. Amazingly polished, and self-hosting! Especially since it looks to be an actually-independent project with POSIX as an optional compat layer.
I feel like the features that are unique are mostly in 3 groups:
1. Features that shouldn't be unique to this system. Tabbed windows have occasionally appeared on other systems, probably should become more common, and there's no reason they shouldn't work elsewhere. I hope this inspires other people to copy ;)
2. Features that only work if you have really tight integration or different primitives than most OSs are using. Programs updating file name when you rename from the file manager, or the file manager showing when files are open... might be possible to graft on to other systems, but it'd be a right pain. Even systems like MacOS are going to struggle - the whole OS is under one company that could do cross-functional stuff like that, but any time it touches applications you need external developers to support it and that may or may not work out.
3. Features that are possible, but nobody else does them because it's impractical - possibly only impractical from their starting points, though. Showing the total size of subdirectories is expensive on e.g. Linux because you have to walk every file recursively. I don't know if they're just eating the cost because they haven't hit a case where it matters, or if their system actually makes it cheap (I could easily imagine a filesystem that moved the calculation cost up front).
Your examples of (2) and (3) have nothing to do with the desktop, really, or, rather, they are about things that one could see reflected in the desktop if only they were feasible at much lower layers.
"total size of subdirectories" is basically impossible as long as there are hardlinks. But hardlinks are useful, so. Even w/o hardlinks, if you have anything like ZFS snapshots and clones then you have multiple kinds of "total size" to show, like "total size" of all the files below vs. total space consumed by those files in this snapshot/clone. You could have hardlinks and all of them and then asynchronously update caches of sizes, but now you waste cycles on that and those cached sizes will often be wrong. You basically can't get there without making trade-offs that kinda suck. It's just better to not bother with this particular feature.
You're assessing the feasibility here on the assumption that a polling pattern would be used, but this is exactly what an event-based pattern is perfect for. As long as files are modified through the OS, you can propagate changes upwards and only update folder metadata when required. This is what the parent meant by moving the cost up front.
If your only option is polling you're right, but we're talking about an OS built from first principles here. You have the control to make sure all file changes go through the proper API and keep the metadata as accurate as you like.
There's a lot more to it than that. For example, two links to a file in the same directory shouldn't count as doubling the space it consumes in that directory, nor in any of its parents -- that's an easy case to consider, but there are others that are much harder.
Adding different kinds of measurements of size doesn't particularly increase the difficulty of tracking it here, just adds some more fields to the metadata and maybe some additional events.
To take an example of size data types from another thread, let's imagine you want to track two kinds of size: size as read sequentially (A) and size gained if deleted (B). The first cares about links but the second does not. Every folder would track the size of subdirectories by listening for size-calculation events firing on the contained files and folders. A lowest-level folder would listen to the events of all its files and update its own size metadata; this triggers another event that any parent folder is listening to; repeat. A link would just listen to the event of its linked file or folder and would only indicate size updates as relevant to size data type A. This information gets propagated upwards as normal.
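To make that concrete, here's a minimal sketch of the propagation step in C++ (every name and structure here is made up for illustration, nothing from a real kernel):

    #include <cstdint>

    // Hypothetical in-memory directory metadata; names are illustrative.
    struct DirNode {
        DirNode *parent = nullptr;
        uint64_t apparentSize = 0;   // type A: bytes as read sequentially (links count)
        uint64_t exclusiveSize = 0;  // type B: bytes freed if the subtree were deleted
    };

    // Called by the filesystem layer whenever a write/truncate/link/unlink
    // changes a file. Deltas can be negative, hence signed arithmetic.
    void PropagateSizeChange(DirNode *dir, int64_t deltaA, int64_t deltaB) {
        for (DirNode *d = dir; d; d = d->parent) {
            d->apparentSize  += deltaA;  // a hard link contributes here (A)...
            d->exclusiveSize += deltaB;  // ...but adds ~0 here (B)
            // Fire a "size changed" event so any open folder view can refresh.
        }
    }

The point being that each file modification costs O(directory depth) at write time, instead of an O(total files) walk at query time.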
As you say, there are a lot of details here, but I'm not coming up with any blockers that would invalidate the pattern. It's a very flexible system and I've written this sort of propagating update before, with future features all being simple to add. If you can think of any blockers I would be very happy to hear them so I can design around them next time I touch a project like this.
This is exactly how I assumed modern operating systems would operate (macOS seems to operate this way because it can actually show size subtallies, but it actually polls and caches; Windows doesn't even bother unless you get properties on a directory, which is lame; I think BeOS may have actually done this?), and I was disappointed when I realized it wasn't... and still doesn't for some reason? Why not? Couldn't they just route all OS-level ways of writing to the storage medium through a system like this?
There's no real reason they couldn't, but polling systems tend to be the first thing implemented, as conceptually they are simpler and require less rigor to enforce. Switching from a polling pattern to an event pattern, however, is a sizeable amount of work, and you frequently get bugs in the switch. The discussion here is in the context of a greenfield project where such concerns aren't an issue.
To be clear, it's not like an event registration system is better in every way than polling. It's trading some extra disk and (potentially) memory usage for upfront and cheaper CPU costs. It is possible that whoever made the decision weighed the two options and decided on polling instead of events, and this decision was made long ago when OS filesystems were first being designed and storage was at much more of a premium.
I don't see why "size gained if deleted" "does not [care about links]".
What I've learned in my experience with ZFS, Lustre, and other filesystems is that the user simply cannot get what you're asking for, not with any kind of real reliability. For distributed filesystems (like Lustre), the kind of thing you're asking for is simply ETOOHARD or ETOOSLOW. It's very easy to insist on a solution, and very hard to get one.
> I don't see why "size gained if deleted" "does not [care about links]".
Deleting links does not give you any additional storage space beyond the minimal amount taken up by the link itself.
As for existing filesystems, yeah, they're going to have problems, as they're built on filesystems that for the most part are polling. I'm not insisting on a solution or saying existing filesystems need to use this though; this is all in the context of a greenfield project. Switching something like Linux to an event-based system would be a major project.
As for distributed filesystems where every file does not own all its own bits, that's just a different way of measuring and doesn't have any major problems for the concept.
I think the vector based UI should be on the list somewhere. Really smart solution to the HiDPI problem that gives you easy arbitrary scaling without any awkwardness
Low-resolution monitors and laptops remain widespread [0], and vector-based UI just doesn't look that good on those (either blurry or misaligned). I'm doubtful that HiDPI will become the mainstream standard anytime soon outside of smartphones and tablets (small screens). Cost increases quadratically with screen resolution as a function of production yield, so low-DPI hardware is likely to remain the cheaper option that many people will continue to choose.
Agreed that it’s a safe thing to say, but there’s nothing stopping an OS from putting in the work to do that, and therefore allow them to have a single vector-based UI for Low and Hi DPI screens.
Right, but e.g. SVG doesn’t support hinting AFAIK. Also, realistically nobody is willing to do the work to add hinting to vector icons. Fonts with good low-res hinting have already gotten rare.
I suspect that we’ll continue to live in a world where the UI is neither optimal for HiDPI nor (any more) for low DPI, while really high DPI (e.g. over 200 DPI on desktop monitors), which would enable going full vector without detriment, will rather remain the exception than becoming the rule.
They have, no? GNOME/KDE have supported SVG icons for years, the themes are usually vector based. I don't think there are necessarily many bitmaps in a default modern GNOME install.
For macOS the reason is simple and deliberate: it's much easier for artists to make beautiful icons as bitmaps. They can use Photoshop and not just vector editors. Given that screen resolutions increase at a somewhat slow and predictable pace it means they can just redraw icons at higher resolutions from time to time to keep up with their hardware changes. They like to change art styles anyway.
Well, they don't have it in such a way that things are arbitrarily scalable (although things are slowly moving in that direction). If you look at the demo video you can see the entire GUI scale right up and down in real time.
Like, I almost want to say that vector should be the native format for anything created on a computer, even in "paint" programs (which might for example cache a rasterized version too)
Weren't both X11 and Windows originally vector-based (as in drawn using lines, rects, and fills) UIs, but everyone moved to pixmaps because the X11 protocol didn't support anything remotely SVG-like and client-side drawing was faster?
What's the state of the art for file systems in the age of M.2 drives that are bonkers fast? I dunno about everyone else, but knowing hierarchically where my disk space is going is a really common concern.
I would imagine the "M.2 fast" is visible more in reading medium-sized files than in 100k tiny reads (when recursively querying directories like in the case GP is talking about).
To make GP's use case better, the FS really needs a custom index, or to require that every write also update the whole hierarchy. Unless of course they work like indexers and just work with delayed data.
Good point, but likewise one has to wonder why we need folder hierarchies at all. With bonkers-fast disks and CPUs, the organising of files should be able to be dynamic, driven by some form of metadata and tags rather than static tree structures. The web doesn't first require you to define a structure before search, so why should a desktop?
This is exactly what Steve Jobs would tell me every week when he would look at the latest Finder build. Of course, we needed multiple pieces of infrastructure to get there: a journaled and indexed filesystem, kernel-level file-system modification notifications, metadata indexing systems, etc. All of the pieces did appear, but there are also those darn stubborn users who kept insisting on the ability to navigate a physical file system to an actual file.
Many of the features of OSX (which I rarely use, but worked tirelessly on) came as a result of Steve's dislike of having to know where files were and having to navigate to them. The Dock, Spotlight, Mission Control, Expose, even Time Machine all came out of Steve's hard-to-pin-down concept of what a modern user interface should be.
I suspect it may be possible to create a fully "Smart Folder" query-driven Finder experience. Your sidebar could be populated entirely with saved queries. This might be a fun experiment!
So I'd have to tag every one of the umpteen files I create (in some session doing whatever) with some spur-of-the-moment searchword I'll never remember later, instead of just cd-ing to some directory and that's where they'll all be?
It's funny how all these anti-directory tag-and-search proponents seem to take for granted that their hobbyhorse is obviously superior. I've never seen any proof that it is, and I don't think it is.
No, I just create files. I don't have to do any tagging at all.
Or: Yes, in a way that's what the hierarchical directory structure already does for us.
In either case: So what's the use of ripping out the hierarchical directory structure and replacing it with some (other) "tagging" system???
(We already have a universal "tagging" system that automatically uses every single word in every single file as a searchable "tag"; it's called "grep"...)
Because spacetime is hierarchical in that way too. I can put a thing in a box in a room, but I can't put the thing in multiple boxes in various rooms. The filesystem thus makes intuitive sense: we store physical objects in exactly the same way.
Now sure, it'd be nice to grep through my house, but I still couldn't hang the same painting in two places at the same time.
On a related note, why a filesystem and not a database? I get the feeling that a filesystem ultimately is just a .. poor man's database. Except that it's quite crappy and there are no proper transactions, many things are impossible without TOCTOUs, many a bug (including security ones) have been due to race conditions around filesystem operations.
I guess path->{metadata,data} with no transactions is a simple abstraction, as with so many other simple abstractions, it just bites you when you try to build anything nontrivial. Then you need to switch to a real solution. Just as people sometimes start with a bunch of shell scripts before they recoil in horror and realize that they should've started with a real programming language.
ProtonMail launched with labels only. They later added folders for those of us who like to file messages under "organized now, don't want to think about this again".
Folders are still so common in part because many of us want them, to the point that both Fastmail and Gmail let you use tags in ways that behave almost entirely like folders, including a hierarchy.
I'd never consider an e-mail system (or a filesystem) that doesn't allow me to at least very closely approximate that structure.
> Showing total size of subdirectories is expensive
To me the real obstacle isn't that it's expensive (I mean, you could ostensibly cache the size on disk for each directory, if not find some other clever algorithmic solution), but that the notion of "size of a directory" itself isn't all that meaningful in the presence of e.g. hardlinks.
Roughly 0% of a typical desktop’s disk space is used by hardlinked files. You can safely double count them. That’s exactly what every disk space analyzer does!
If you really wanna avoid double counting, just divide the size of every file by st_nlink. Of course you'd have to update the cached sizes of every directory that has a link to that inode, so you'd need to cache the mapping from inodes to paths too. Another solution is to cache 2 sizes per directory, one for all files with 1 link and another for files that have 2 or more. The UI could hide the latter when it's 0GB. But this discussion is academic; nobody really cares about hardlinks.
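For what it's worth, both accounting rules from this subthread are a couple of lines each on top of POSIX stat data (illustrative sketch, not from any real analyzer):

    #include <sys/stat.h>
    #include <cstdint>

    // Both take a struct stat populated by lstat(2).

    // "Divide by st_nlink": a regular file with N hard links contributes
    // size/N to each directory that links to it, so directory totals sum
    // to the true usage with no double counting.
    uint64_t SharedContribution(const struct stat &st) {
        return (uint64_t)st.st_size / (st.st_nlink ? st.st_nlink : 1);
    }

    // "Size gained if deleted": removing one link frees nothing until the
    // last link is gone, so only singly-linked files count.
    uint64_t GainedIfDeleted(const struct stat &st) {
        return st.st_nlink == 1 ? (uint64_t)st.st_size : 0;
    }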
> Roughly 0% of a typical desktop’s disk space is used by hardlinked files. You can safely double count them. That’s exactly what every disk space analyzer does!
While that may be a reasonable strategy, it's not what every disk space analyzer does.
I've got a backup setup that uses hardlinks to provide a wide variety of restore points without using a lot of space. du doesn't double count:
$ du -hs daily.0
436G daily.0
$ du -hs daily.1
436G daily.1
$ du -hs daily.0 daily.1
436G daily.0
12M daily.1
Not sure what you consider a "typical desktop", but on Windows, WinSxS has gigabytes worth of hardlinks. If you don't care about them that's another matter I guess.
Also note that the user will be confused when they delete the whole directory and observe 0 bytes get freed. (I guess a similar problem is also there even if you double count.)
The point is, the problem itself is ill-defined. There's no solution to that other than scrapping or redefining the problem itself. And it's hard to define the problem precisely for a non-technical user.
Then you get into semantic arguments: if a directory contains two 1GB files, does the user care that 99% of their blocks are shared, or is that just an under-the-hood implementation detail, and the user wants to know that there are 2GB worth of files in there?
Really, there should be two file sizes: "how much space this will take if I copy it to other filesystem" and "how much space will be freed if I delete this"
Well, three, "how many bytes do I get if I open it and read all the bytes out". Or maybe four, how many bytes do I get if I open it and read all bytes which aren't holes (ie. how many bytes do I need to put into an archive that supports sparse files) :)
I would expect at least one of those cases to be identical to "how much space this will take if I copy it to another filesystem": if you're asking a generic question where the target is hypothetical, all you can say is how many bytes the raw files contain, since everything else requires knowing specific details about the target.
> "how much space this will take if I copy it to other filesystem"
This is ambiguous between "how many bytes are all these files in total" and "how many bytes does it take to store a single copy of all these files on such-and-such file system (mostly the current one)". The latter can be different because of transparent compression, which is common on e.g. BTRFS.
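Two of those sizes are already exposed per file by plain stat(2), incidentally; a tiny sketch:

    #include <sys/stat.h>
    #include <cstdio>

    // st_size   = bytes you get if you read the whole file (holes included);
    // st_blocks = 512-byte units actually allocated, which reflects sparse
    //             holes and, on e.g. BTRFS/ZFS, transparent compression.
    int main(int argc, char **argv) {
        struct stat st;
        if (argc < 2 || stat(argv[1], &st) != 0) return 1;
        printf("apparent:  %lld bytes\n", (long long)st.st_size);
        printf("allocated: %lld bytes\n", (long long)st.st_blocks * 512);
        return 0;
    }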
> Features that only work if you have really tight integration ... Programs updating file name when you rename from the file manager ... Even systems like MacOS are going to struggle
MacOS has actually had this one in the bag for a couple of decades now (likely due to their really tight integration).
Doesn't even Windows do that? Rename a file in File Explorer, and the app that has it open pops up a dialogue saying "File So-and-so has been renamed.", or some such. But maybe that's because the app has to do the work of keeping track of it (though probably via hooks into the OS / Explorer), so it's not fully automatic? Can't recall if all apps do it, think I've only noticed it in a few.
Again, it depends on how the application accessing the file is developed.
If your program just reads the file into memory and then closes it, waiting for the next save to reopen it, the OS can't know which file you are working on. And this, I think, must be a pretty common implementation.
You can use inotify to figure out what files are opened, closed, or when they are written to. It can be used for example to indicate active file downloads in the file manager.
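If you haven't used it, it's roughly this (minimal sketch; the watched path and the reporting are placeholders for what a file manager would do):

    #include <sys/inotify.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = inotify_init1(0);
        // Watch for files being opened and closed in one directory.
        inotify_add_watch(fd, "/home/user/Downloads",
                          IN_OPEN | IN_CLOSE_WRITE | IN_CLOSE_NOWRITE);
        alignas(struct inotify_event) char buf[4096];
        for (;;) {
            ssize_t n = read(fd, buf, sizeof buf);  // blocks until events arrive
            if (n <= 0) break;
            for (char *p = buf; p < buf + n;) {
                auto *ev = (struct inotify_event *)p;
                printf("%s %s\n", ev->len ? ev->name : "(watched dir)",
                       (ev->mask & IN_OPEN) ? "opened" : "closed");
                p += sizeof(struct inotify_event) + ev->len;
            }
        }
    }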
However I don't think that's quite enough, as most apps will close the files again after they have read them. There is no need to keep them open, as you are working on their in-memory representation, not on the file on the disk, until you hit 'Save'. So this seems more like a desktop-level feature similar to "Recent Files", where the app has to manually announce what it is doing instead of relying on low-level file operations to figure it out.
If every application uses inotify and correctly handles the events, I expect so. Which brings us back to "even if you can do it good luck getting buy in".
> Programs updating file name when you rename from the file manager, or the file manager showing when files are open... might be possible to graft on to other systems, but it'd be a right pain. Even systems like MacOS are going to struggle - the whole OS is under one company that could do cross-functional stuff like that, but any time it touches applications you need external developers to support it and that may or may not work out.
I thought Mac OS apps could and did do just that. Perhaps it's an inconsistent behaviour that depends on the developer implementing it rather than something that comes for free by using Swift UI or Cocoa.
I think the point of the tabbed windows isn’t so much window tabbing, as the idea that the window belongs to the user and not the app.
The user manages windows and not apps.
Currently, this is being leveraged in tabbed apps, and in the ability to open a window, resize it, etc and then decide which app should show itself in that tab of that window.
But the paradigm change itself could potentially lead to new workflows, etc., if creative app developers and designers really start working with it.
Thanks for the link. Appreciate the amount of work that has gone into this, very well done to the author(s), super work.
Stunning attention to detail - font choice on install, dynamic file renaming, open file highlighting, tabbed windows with apps inside. These felt so intuitive to me as I watched. Vector based UI - excellent choice!
Watching the demo brought back memories of OS/2 and BeOS in terms of aesthetics and usability. The theme designer blew me away - it looked like Sketch or Adobe XD in terms of UI prototyping. If this offers a way to draw your UI with vector based tools and have it "attached" to the OS's API then that's really exciting.
Further, if this could be extended to a VB style application where one can code in actions, events etc in a user accessible way [1] then I think it would be a killer app for this OS where a user/developer could shape the entire OS to fit the needs of the design.
I'm thinking here of a single use OS on a USB stick - for example, a writer's OS where the whole OS/Apps just provide a rich environment for wordsmiths to work in creative isolation. Like the "Zen" mode of many editors, but taken to the next level. Glorious looking dictionaries, pinboards for notes and research etc etc. Design the UI, apps and utilities and compile it all into a single, bootable OS would be wonderful. That would be very satisfying to me the way my mind is wired.
[1] To be clear, a language which is approachable by beginners, e.g. BASIC like or some high level scripting language. Also pluggable languages would be very cool for advanced workers.
Free Pascal is already ported to many different operating systems. But to get the VB-like Lazarus IDE and component library across, you'd also have to port Qt. Hm, Qt itself is also already ported to many different operating systems, so...
I just watched the video and noticed the whole Vector based UI.
I know there are vector-based icons and graphics sets in other OSes or desktop environments, but is this the first time the whole UI is vector-based? Lots of interesting thoughts and experiments in modern-day vector UI capabilities.
Technically GUIs are, or used to be anyway, drawn with drawing commands like "DrawRect, DrawLine, DrawRoundedBox", etc., which are basically vectors encoded in code and could be easily scaled. Windows has/had the metafile image file format[0] that basically encodes these calls into a stream for later reuse (AFAIK Office clip art used to be metafiles). Since these are vectors they can be arbitrarily scaled. The earlier versions of Windows even had to work not only on different pixel densities (that produced more or less the same visual screen space) but also on different pixel aspect ratios, like CGA's 640x200 mode.
However once GUIs started relying on using bitmaps for drawing parts of the theme things got a bit less vector-y. But that happened later.
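The underlying idea is tiny, too. A toy display-list / "metafile" in C++, with all names invented for illustration:

    #include <vector>

    // Record device-independent drawing commands, then play them back at
    // any scale -- the property that made early GUIs resolution-independent.
    struct Cmd { enum { Line, Rect } op; float x0, y0, x1, y1; };

    struct Metafile {
        std::vector<Cmd> cmds;
        void DrawLine(float x0, float y0, float x1, float y1) { cmds.push_back({Cmd::Line, x0, y0, x1, y1}); }
        void DrawRect(float x0, float y0, float x1, float y1) { cmds.push_back({Cmd::Rect, x0, y0, x1, y1}); }

        // Replay into any target; 'scale' maps logical units to device pixels,
        // so the same recording renders crisply at 1x, 2x, or 1.37x.
        template <typename Target>
        void Play(Target &t, float scale) const {
            for (const Cmd &c : cmds)
                t.Emit(c.op, c.x0 * scale, c.y0 * scale, c.x1 * scale, c.y1 * scale);
        }
    };

The trouble starts once bitmaps enter the recording, which is exactly the shift described above.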
Agreed, the video really impressed me. He mentioned that you can't delete files yet; I am wondering what the idea would be if you delete a file that is open in an app. Change it to untitled in the app, maybe?
Very impressive, amazing amount of work went into it clearly.
This video is already included in the web page under the “Watch a demonstration of the system running on real hardware. Recorded in October 2021.” paragraph.
I do not like GNOME 3 at all myself. A lot of the features that the developers are proud of – things like CSD, no menu bars, that big empty wasted panel with no indicator icons, the clear empty desktop with no desktop icons and so on – are the specific things I do not like.
I still use Unity on Ubuntu. One of my machines has the latest version of the Unity remix on it:
https://ubuntuunity.org/
... but little regressions are accumulating. I can't empty the wastebasket from the dock any more; the volume control works but mousewheel control is now reversed (down is up, and up is down, although I have "natural scrolling" turned off); Firefox doesn't support the global menu bar any more; and so on.
It seems to me that a lot of modern desktops are removing features, or changing features (like the disappearance of menu bars), because people today don't know how to use them. Since they don't know how, they don't use them, so they feel that these features are not important, so they remove them.
Similarly, Windows 11 has now lost support for vertical taskbars. KDE has it but it's broken, as it was in GNOME 2 and now is in MATE. Cinnamon has a crude form, but it can't arrange status icons in rows, only in columns, which wastes a huge amount of space... but I suspect they've never seen a vertical taskbar, so they don't know how it should work.
So many interesting/fascinating concepts and ideas that will remain relegated to niche communities because of basically one problem: lack of drivers. This is the problem that must be solved if we ever want to move away from the dominance of a few OSes "too big to fail" and their million compromises to a thriving field where competent programmers can create new OSes as easily as now they can create new apps.
Unfortunately this will likely continue, as hardware vendors sacrifice both the environment and human rights (quasi-slavery in the mines/factories, as well as user freedoms) in the quest for profit.
On the topic of drivers and specifications, I strongly recommend a talk entitled "It's time for the OS to rediscover hardware" [0]. If you prefer a written form, "OK Lenovo, we need to talk" is a great article on the same topics. [1]
If you write your operating system to use the paravirtual devices and run inside a hypervisor then you don’t need to worry about drivers for all the physical devices you might find.
By the time you have attracted enough users that those few percent of performance you are losing on the paravirtual drivers is an issue, the extra code for some performance critical native drivers will be a small part of your OS.
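To sketch what that buys you: every virtio device, whatever the host hardware, shows up with PCI vendor ID 0x1AF4, so a single probe finds your network, block, and console devices. Something like this (hedged sketch; outl/inl are assumed kernel-provided port-I/O helpers, and this uses the legacy 0xCF8/0xCFC config mechanism):

    #include <cstdint>

    // Assumed to be provided by the kernel's port-I/O layer.
    extern void outl(uint16_t port, uint32_t value);
    extern uint32_t inl(uint16_t port);

    uint32_t PciConfigRead32(uint32_t bus, uint32_t dev, uint32_t fn, uint32_t off) {
        outl(0xCF8, 0x80000000u | (bus << 16) | (dev << 11) | (fn << 8) | (off & 0xFC));
        return inl(0xCFC);
    }

    void EnumerateVirtio() {
        for (uint32_t bus = 0; bus < 256; bus++) {
            for (uint32_t dev = 0; dev < 32; dev++) {
                uint32_t id = PciConfigRead32(bus, dev, 0, 0);
                if ((id & 0xFFFF) == 0x1AF4) {   // virtio vendor ID
                    uint16_t device = id >> 16;  // selects virtio-net, virtio-blk, ...
                    (void)device;                // attach the matching driver here
                }
            }
        }
    }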
This used to be the case but, nowadays, there is a second problem, which is the lack of a web browser. Now, in addition to dealing with present and future hardware, you need to deal with present and future web technologies, or your OS will not succeed. This makes it more difficult for amateur OS designers.
All this said, it makes me very happy that some people are brave enough to keep trying.
Sure, you need to keep up with the rest of the world. You used to have to include a C compiler with your OS. When was the last time Windows came with a C compiler?
...or you can go the other route and make a very deliberate decision that there is one specific piece of hardware that you're going to support and not be sidetracked into developing drivers for a huge array of hardware.
For example, I bought a used Sony Xperia X that was several years out of date when Sailfish OS was released. I also bought a half-decade-old ThinkPad because it was one of the few machines that LibreBoot supports. I actually care way more about software than about having the latest hardware or the broadest array of hardware options to choose from.
I wonder if this could be solved by picking a widely available but fairly stable platform for it - like a Raspberry Pi - and making sure it works there. Atm, it seems like it's an x86-based OS, so I'm not sure how difficult it would be to port it to ARM.
> I wonder if this could be solved by picking a widely available but fairly stable platform for it.
For buses and basic IO it might be a solution. For the broader picture, it may not be.
For example: your favorite USB device that requires a custom driver won't work (like a USB-to-Serial FTDI chip). Starting from there, now imagine giving support for every printer in existence.
I am not an expert at all, but such compatibility layers exist.
For example Captive [0] was able to reuse Windows XP binary NTFS drivers on Linux.
At that time there was no Linux NTFS driver with writing capability.
As Linux's driver model is incompatible with Windows XP's driver model, Jan Kratochvil had to code a Linux driver implementing a CORBA interface [1] to talk to a remote piece of code implementing the other end of the CORBA connection.
This remote code itself was in two parts: one was a driver subsystem borrowed from ReactOS 2.3 (its kernel was similar to the Windows NT 3.51 kernel at the time) that piloted a binary Microsoft native NTFS driver.
We already see this to a certain extent, Linux has NDISWrapper to use some Windows drivers, but my understanding is that it isn't so straightforward for most other classes of driver because the way they work has a lot to do with the way the target OS kernel works and it's hard to map from one to the other.
I keep wondering if this is as bad as it looks. Compared to the old days, where there seemed to be an infinite combination of drivers, these days GPU, sound and network vendors are fairly limited.
Now that Intel has finally been forced to change its mindset, I hope they will see the need to open-source their IP and drivers as a competitive advantage. I am not even an avid open source supporter, but if they need or want to balance the power in operating system competition, that is what will be needed.
This will require a complete pivot to Fabless and IP model.
It is sort of strange to see how things play out; I could see the possibility of Intel and Microsoft releasing the x86 ISA and the Windows kernel as open source in the future.
This is insane, I can't imagine how much work went into it.
I had a similar idea as a kid. I was pissed at all the bloat consuming my CPU and RAM on Windows, so I wanted to build my own OS that would run a single app taking advantage of 100% of the hardware. I learned some assembly and managed to create a bootable floppy disk, then quickly gave up, realizing how much work a functioning OS would take...
I knew x86 well from demo scene coding, and I had the Linux and NetBSD sources to help, but the hardest bit was just getting all the boot sector stuff going properly and getting the processor into 386 mode as soon as possible.
I wrote an entire OS that booted into a windowed GUI, multi-threaded, file system support, etc., etc., and my goal was the whole thing booting happily to the desktop in 4MB of RAM from a 1.44MB 3.5" floppy, which it did. Every line was written from scratch in x86 assembler, because I was a masochist like that.
I called it Tinkerbell, for reasons lost to time, and it was hosted at tinkerbell.org back when I owned that domain. I just checked archive.org but sadly they didn't grab it when it was around.
Because they do a lot more? The video demo is the happy path. It's perhaps hard to notice things like the image viewer supporting drawing things on the image but not supporting saving the resulting file, or the lack of alt-tab.
Also, Essence is basically a Win32-like system (with some very small use of C++ but e.g. using char* instead of std::string). The kernel is handling graphics and the windowing system, like it used to do in Windows. Even the eyedropper you saw has kernel mode support.
Yes, you can get very efficient code this way, but only at the cost of low programmer productivity / reliability / security, especially as the code scales up to more than one developer. For a hobby OS it doesn't matter. For a commercial OS it's not good enough, hence Microsoft's and Apple's investment in .NET and Swift. These consume more resources but make it easier for programmers to avoid mistakes and work together.
Don't get me wrong. I'm loving the style, the panache, the clean code, the ambition. Fantastic project. But it's a bit naive to ask "why can't all operating systems be like that?". Operating systems written by one guy will inevitably be fast and light compared to an OS that's 30 years old and which has 10,000x the number of features (at a conservative guess).
In terms of general minimal-viable support, that's probably hit-and-miss (perhaps around where Linux distros were at in the early 90s - probably worked, except for everywhere it didn't), particularly with the increasing ossification and variability of BIOS/legacy emulation in current-era systems.
I remember one called SkyOS as well - it was really far along, probably killed by the lack of driver support. Unfortunately its site seems to no longer exist.
> I remember one called SkyOS as well - it was really far along, probably killed by the lack of driver support. Unfortunately its site seems to no longer exist.
Considering the fact that according to the German wikipedia page (https://de.wikipedia.org/w/index.php?title=SkyOS&oldid=21527...), the last version of SkyOS is from 2008, I don't believe that SkyOS was killed because of a lack of driver support, but rather because the main developer simply had to make money. But feel free to ask him directly, you now know how you can.
Spent some time reading through the networking stack, scheduler, synchronization, and networking device driver. It was a total breath of fresh air to read code in this area that wasn't a total spaghetti of years of maintenance and feature creep. Really happy to see a project like this!
I've had some classes from Sape Mullender where he showed Plan 9 source code. It was also quite beautiful. Each function fit on 1 slide and was clear enough to need no documentation.
This is an amazing piece of work, but it's clearly a labor of love: this isn't going to have any real-world use any time soon.
Where operating systems are headed is more towards security (process isolation, bulletproof input etc), not lightweight GUIs on top of thin kernels like this.
Are there any passion (or other) projects that explore this? I know about Qubes, but that's more like a heavy layer on top of a heavy duty GUI, on top of a Linux kernel.
Spectrum OS, meant to be a more usable upgrade from Qubes. Based on NixOS. Currently stuck on plumbing problems. https://spectrum-os.org/
Bheem OS, "a next generation secure operating system." Inspired some by Spectrum. So new they can't keep their blog online. Here's a snapshot of a recent blog post about the security features https://blog.openw3b.org/crosvm-for-os-and-app-virtualizatio...
The main problem I ran into with Qubes is that having a Xen hypervisor and an NVIDIA desktop graphics card in use (with proprietary NVIDIA drivers for proper performance) seem to be mutually exclusive. A Xen dom0 needs to use the host system RAM in some way that causes kernel panics and crashes when the NVIDIA DKMS driver is loaded.
I would wager that 99% of Xen-related development is intended, as it should be, for dom0 server environments that will never have a keyboard, mouse or 3D-capable video card plugged into the bare metal.
The reason isn't so much that Xen doesn't support it, but more that graphics cards are not really designed for isolation. Correct me if I'm wrong, but I don't think you're supposed to compute one thing on a GPU and expect another thing not to get access. Recently Qubes has made progress on a GPU VM where you can compute on a secondary GPU, but it only works for AMD currently.
I use NVIDIA every day with Qubes, but just for display output. I sometimes see memory leaks where it will draw screen buffers from booting alternate OSes on another drive.
Yes, that as well. I would highly doubt that the people at NVIDIA writing the driver for their PCIe desktop graphics cards (in my case an NVS 510) are putting much consideration into things like Xen. The card hardware and driver design is intended for single-user environments...
> this isn't going to have any real-world use any time soon.
If it was ported to ARM and had a decent GUI interface builder, it could become a killer OS for making interface panels for appliances. Some manufacturers are repurposing Android for this task, but this forces them to use much more powerful hardware, and their UX is still laggy.
Agreed. I do think there is plenty of real-world application for Essence. I have some professional experience in this space (Forth, embedded SBCs, micros) and a number of times designs got rejected on production costs because of the need to have hardware specified to the needs of the OS/GUI and not the small control program running in tens of KB of memory.
On first glance, Essence looks like it makes efficient use of hardware resources and, as you say, an ARM port makes sense. I can visualise an OS version of web-packer-type tools where you roll up your app/GUI and just enough OS to run them on the target. I can say that people I've worked for would pay good money for that sort of kit. Not suggesting closed source, but in terms of the usual caveat to management - paid support, open source longevity, etc.
The ease of hiring contractors to take care of programming the respective apps wins over the possibility of cheaper hardware; that is why they are moving to Android and not a POSIX OS with Qt or something like that.
There was SubgraphOS which used namespaces, but it seems pretty dead.
Qubes is the best and most complete thing I've seen currently (in the last couple of years it has also made progress supporting GPU domains). It's not perfect and has some controversial opinions, like default passwordless sudo in VMs, but it is still far ahead of most distros' security.
For me Qubes 4.1 performs well, though I have it on an M.2 drive and have plenty of RAM. It boots quickly, with the only seriously slow thing being VM startup. I'm able to do all my dev work and use Windows for work-related things on it, as well as have disposable browsers for banking and sketchy sites.
> There was SubgraphOS which used namespaces, but it seems pretty dead.
Knowing nothing more than what you've just told me: That doesn't sound like a bad thing, IMO; if you're using that kind of system, you probably care a lot about security, and I'm not sure I'd trust a shared-kernel system against malicious code. (I appreciate that other people may make that trade-off very differently)
" There are lots of frameworks out there for making websites. You've probably encountered them. Well, Qbix is different. It was designed from the ground up to power social apps.".
To me it looks more like a web app than an OS.
> I know about Qubes, but that's more like a heavy layer on top of a heavy duty GUI
Actually Qubes has relatively small CPU overhead due to its utilization of hardware VT-d virtualization. RAM usage is huge though. Source: using it as my daily driver.
I wonder if I can have a Qubes VM with Essence working.
It's neat. There's like a whole indie OS dev scene these days. Kinda like when people used to write OS GUI shell fanfiction; now they're writing their own novels, with kernels from scratch and everything.
AFAIK this was a thing for a long time and not something recent. IIRC Bochs for example was written by someone making their own OS and wanting a way to debug it.
The "open a window and then put an application in it" thing reminds me of Plan 9; in that case, IIRC, you open windows and they start as a terminal, but if you execute anything else the new program just "inherits" the same window.
Slightly similar in practice, but very different concept.
In Plan 9, every process draws by interacting with a screen file (in fact, it is a whole filesystem, also including keyboard, mouse, control files, ...). What rio, the Plan 9 window manager, does is "multiplex" this screen file (again, file system) so that each process has its own screen, which does not correspond to the physical monitor but to a window.
An uncommon feature that derives from this design is that you can run a new rio instance inside a rio window. More important, you can just connect to other machines and interact with their windowing filesystems, getting network transparency "for free".
There is definitely a place for operating systems like this one in the future. The author is creating something many of us want: a system where the user is in control of their computer! Fancy that! Imagine not having to go retro but having total control of a modern machine.
Very cool. Nice that it runs without all the processes... But that will come at a certain point haha.
I like that they implemented composing tabs between different applications. This is the way I try to organize work/projecta/projectb/personal use, but it never works with current OSes.
Spaces in macOS is actually kind of useless, because certain applications have windows across screens (finder, chrome).
Tabbed Finder windows are the most useless thing ever.. It's Apple listening to HN haha.
> Tabbed Finder windows are the most useless thing ever..
I'm glad I'm not the only one. I had to switch off of Nautilus because it kept trying to make me open things in new tabs instead of new windows. The thing is that I don't use the file manager much, and 9/10 times I want to open directories it's because I want to move files between them or, less often, visually compare them, which doesn't work well with tabs.
The state of file managers is really annoying. I'm currently using Caja, which is a Nautilus fork for MATE - so basically Nautilus frozen in time before they yanked the spatial bit and destroyed a bunch of the rest of the usability too. It's better, but still hampered by just undoing the latest generations of damage done to it, not trying to do better - it was never great.
To start with, I wish all of these would learn the lessons from e.g. AmigaOS of separating the "serious file manager" (which would tend to be e.g. Directory Opus or DiskMaster II) from the launcher / desktop (e.g. Workbench). Though Directory Opus could be used as a Workbench replacement.
Secondly, I wish they'd all learn from the "middle ground" Workbench offered as a spatial-ish launcher: Workbench does not force only one open window for a given directory, so it's not fully spatial. It also does not remember the locations of everything unless you make it. But it does let you lay out directories spatially on the desktop and "snapshot" their locations and the locations of the files within them. And that makes it far superior as a launcher, because you can organise your most-used directories etc. for easy access. I get why people hated on a fully spatial launcher, but the tradeoffs of Workbench gave it the main benefits people who like spatial launchers/file managers tend to be looking for, without the downsides people who hate them tend to hate the most - e.g. you can use Workbench without ever being affected by its ability to snapshot icon positions or window positions if you prefer.
These don't make it a good file manager, but for that a multi-pane design is far better anyway, and trying to shoehorn the full flexibility of a good file manager into a launcher is a folly. Dual pane + a command palette is a start, but e.g. DiskMaster II is the one I like best, because it lets you fully configure the layout with arbitrary file-listing windows + arbitrary command palettes. For a file manager with that flexibility, also having the option to open in tabs would be great, as long as it can be turned off.
The result of insisting on trying to combine the two seems to usually be something which is bad at both (though nothing really prevents you from making something that reuses most of the functionality to let you configure it to work like both a good launcher and a good file manager, it seems the temptation then quickly turns to trying to merge the two modes).
Edit, because I forgot: which in turn is a fork of an early version of https://wiki.lxde.org/en/PCManFM and several 'mods' thereof.
Anyways. For me it's really usable, even without too much fiddling. Stays out of the way. Doesn't need much RAM and can handle anything I throw at it, which may not be 'Data-Hoarder' territory exactly, but I do have large Tsundoku-stacks which require work to shrink them, and the usual assortment of movies and music, pictures, sources, and so on.
I'll take a look, but SpaceFM seems to suffer massively from exactly the attempt at combining a launcher and file manager and ending up being bad at both that I lamented above... It also doesn't seem to support spatial-ish browsing (at a minimum the ability to remember window locations and icon positions and allowing manually changing them), in which case it's a non-starter for me as a launcher. Maybe it's better as a file manager, but it still feels like ca. 1985 Diskmaster 1.3 in terms of capabilities (which, to be fair, doesn't make it any worse than most Linux desktop file managers; just not better either). It'd take a huge step up in zzzfm to turn SpaceFM into something worthwhile for me.
I don't use finder tabs much. I just use them to create a new tab, when I need to copy something inside a directory without losing my current "state".
I tend to use Finder tabs very differently from browser tabs, where I can have many different websites open but later tend to forget why I have those there.
I'm glad that MacOS can restore everything pretty well after a logout/login. A major focus of OS should be on managing these "work state" through new UI paradigms/tools.
edit: typos
> I like that they implemented composing tabs between different applications. This is the way I try to organize work/projecta/projectb/personal use, but it never works with current osses.
This workflow works very well for me on Win10. I can easily separate work/hobby stuff by using different desktops.
But wouldn't this also run on a laptop that is much older? My pentium 4 computer from when I was not much older than my daughter had much better specs than the minimum they need here. I'm sure there are some other limitations though.
Curious: is porting complex C++ projects to arm64 just a matter of updating build/CI systems, or does it actually involve changing core code and adding branches like if (x64) {} ... if (arm64) {}?
For common programs it's not very difficult, usually just a recompile. But if it uses assembly then you are out of luck, and likewise if it requires a library that uses assembly.
For an OS it is different, you need to reimplement many parts: bootloader, drivers, task scheduler, virtual memory...
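And on the application-code side of the question: the arch-specific branches are normally compile-time, via predefined compiler macros, and cluster in a few low-level files. An illustrative example:

    #include <cstdint>

    // Not runtime if(x64) checks; the compiler picks one branch per target.
    uint64_t ReadCycleCounter() {
    #if defined(__x86_64__)
        uint32_t lo, hi;
        asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
    #elif defined(__aarch64__)
        uint64_t v;
        asm volatile("mrs %0, cntvct_el0" : "=r"(v));  // virtual counter register
        return v;
    #else
    #error "port me"
    #endif
    }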
x86 and ARM don't share the same memory consistency model, so while managed languages do somehow protect their users from the differences, languages like C++ do expose it by default (unless you make use of the C++11 memory model APIs[0]).
So it depends on how carefully the lock-free kernel data structures were written, and on other memory-access assumptions.
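For anyone who hasn't hit this: the classic message-passing case, which happens to work with plain stores on x86's strong ordering but needs the explicit C++11 orderings to be correct on ARM (standard textbook sketch, nothing Essence-specific):

    #include <atomic>

    int payload;
    std::atomic<bool> ready{false};

    void Producer() {
        payload = 42;
        ready.store(true, std::memory_order_release);  // publishes 'payload'
    }

    int Consumer() {
        while (!ready.load(std::memory_order_acquire)) { /* spin */ }
        return payload;  // guaranteed to see 42 on x86 *and* ARM
    }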
QNX, V2OS [1] (written in x86 assembly), SkyOS, to name a few. I mention QNX because I remember running it from a floppy at the end of the '90s. A Unix system with a GUI, from one 1.44MB floppy.
In the 00s, the website OSnews often covered all kind of alternative OSes (how I came across SkyOS).
I love minimalist stuff, but I don't really believe this is a good idea. It's not compatible with a lot of things.
Sure it's cool, but the problem will always be the lack of open hardware. Personally I will always be more curious about smartphone OS made from scratch.
The only thing I want right now is to use old smartphones with a lightweight OS.
When I see a new OS I always like to see what it brings to the table, especially if they take the effort of not being yet another POSIX clone written in C, without anything else to brag about.
So it is nice that it makes use of C++, and there are some frameworks already, like the GUI stack; however, this is missing from the website.
For the kernel, or for the user-mode applications? C++ features aren't very useful for kernels. Presumably the author is more familiar with C and "C with classes" C++ and someone can rewrite the user-mode apps in modern C++ later.
Both. SerenityOS, IncludeOS, and Managarm used Modern C++ for the kernel. For example, Managarm used coroutines and templates, and also RAII stuff like lock_guard, in its kernel.
It is foolish to assert that "C++ features aren't very useful for kernels". C++ features are useful for programs, particularly big programs. Kernels are big programs.
Certain people have been doing this sort of thing forever. It was never a good idea. Whenever you probe why they thought it was a good idea, all the reasons (where they are expressible at all) turn out to depend on falsehoods. Most usually, though, it amounts to laziness about learning anything new.
You can see Modern C++ put to use in SerenityOS, to excellent effect.
This is part of the handmade software movement. An effort to focus down on elegant software development using primitives rather than generic libraries and testing the status quo. Exciting to see this project move along!
These days it's easier to create an OS from scratch than a web browser.
This is an excellent observation, and gives me an excuse to recommend Alan Kay's seminal OOPSLA 1997 keynote "The computer revolution hasn't happened yet" -- the link to the whole thing is below, but I've set the time at the start of what I think is the immediately relevant point (he takes a minute or so to explain).
That's actually pretty scary. Someone can write an entire graphical operating system, including filesystem and everything, but that same person has no hope of writing a browser to go with it.
There's almost no commercial incentive to create a browser/rendering engine (Opera couldn't make it work), and it's almost too much work for an open source project to take on. As sad as it might be, Google won and Chromium will be the final browser; everything else is just customization. At least Microsoft had the courtesy of slowing down progress with Internet Explorer, allowing others to more easily catch up.
The SerenityOS people are implementing their own browser. It's nowhere near usable for the modern web yet, but it's slowly chugging along, inching closer.
Of course the difference between building an OS and a browser is that for an OS, you just have to build something that is usable, how you get there and what that looks like is up to you, and you can really get creative with it and break norms.
For a browser engine, by definition, you have to build something that behaves basically exactly the same as any other modern browser. So you're implementing a gargantuan spec to a very high level of precision (since you have no control over how it's used), and the end result is a technical feat that's only impressive in the fact that it was done, not in that you get to use it now, and it does anything a reskinned chrome wouldn't do.
Lately I was trying to understand the exact reason why. Why are modern browsers so ridiculously complicated? Rendering (albeit confusing) content is a PDF-like kind of job (or am I wrong?). What is the exact complexity here (except for the JS compiler, which is again just a compiler)?
There are hundreds of standards spanning tens of thousands of pages. Some of them are obsolete, some of them are not, and most of them interact with each other in complex and non-obvious ways.
The methodology used there is horrendously bad. Drew's smart, so it's hard to conclude that this isn't intentional just so he can pump up the numbers and tell cute stories like the one about Wikipedia's list of longest novels. (Not to mention, an overabundance of specification of correct behavior isn't what makes implementation hard; it's trying to match the undefined behavior that everyone else follows without having a spec that makes things hard.) Any serious attempt to measure the scope of what browsers actually implement is pretty straightforward, so there's no real excuse: you start with the HTML5 spec and then go from there.
Thanks.
It looks like a complexity increase deliberately aimed at maintaining market monopoly. I can't rationally believe that tens of thousands of pages of documents are needed to display just a (however complex) UI. It's a UI engine, after all, and nothing more.
It is not merely "comparable". It is in fact hundreds or thousands of times more complex, as it turns out (and it still consumes OS API's). I don't find it adequate.
No, Haiku is basically a carbon-copy of BeOS. Still, Essence's BeOS vibe comes from the tabbed windows and the strong emphasis on single-user computing.
There are a lot of experimental/hobby OSs coming out recently, and it's great to see new ideas. I'd love to see more experimentation with stable driver API/ABIs. Decoupling drivers from the kernel might improve hardware support, because those with deep hardware knowledge could come in, write a driver, and then no one would ever have to update it just because of kernel changes.
Is this OS a fork of SerenityOS (also completely written from scratch in C++)?
I'm amazed how many single person or small team developed operating systems are out there. Another one I like is RedoxOS (written in Rust) and Resea (microkernel written in C). Also, there's KolibriOS (written entirely in Assembler).
This type of comment is getting so old I wonder what can be done about it. Maybe the reason why we don't have "different things" is because users of these so-called safe languages are too busy arguing online about how safe their programs are instead of actually writing anything worthwhile. Meanwhile, operating systems will continue to be written primarily in C/C++ for the foreseeable future.
In case you check what I'm doing with my safe language you may be surprised. That's something new and worthwhile. Not an OS unfortunately, I would be happy to work on that but I can't live from that.
> Meanwhile operating systems will continue to be written primarily in C/C++ for the forseeable future
And we will continue reading about lives ruined by bugs and people killed by exploits. There is no sustainable future with an unsafe foundation.
Rust is an ugly language that is not fun to use. There is one OS written in it (Redox) and its progress is very slow. It's actually less work to audit C++ code manually to find memory unsafety than it is to use Rust and get that safety "for free". Sorry to say these things out loud.
Zero-days only show that someone wasn't willing to put in the effort to audit the C++ code. If they won't do that, how will they put in the effort to write it in Rust?
This reminds me of an OS built with asm for Dell desktop PCs around 2005-ish. The DE had most things you'd expect from a Linux distro at the time, including true transparency (IIRC), and the whole thing fit on a floppy. Saw it in a Linux mag at the time.
Not GP, but just was about to skim the sources and changed my mind upon seeing GitLab: its UI is sluggish to the point where FF occasionally asks whether it should stop JS execution, at least with larger projects.
What is a "desktop operating system"? Why do you need to reinvent operating system when all you create is a (rather traditional looking) desktop environment?
> Why do you need to reinvent operating system when all you create is a (rather traditional looking) desktop environment?
Because a lot of the things they're doing are vanishingly unlikely to work on an existing system. (I discussed my breakdown of whether features could be portable more above: https://news.ycombinator.com/item?id=29952283)
Microsoft has been reinventing their GUI constantly, starting at the API level. Maybe they'll get it right after a few more tens of billions of dollars. Meanwhile, when I open the file manager in Windows 11 with Microsoft's own dark theme, it flashes white before it settles. I guess it is hard to get it right when you are swimming in an ocean full of your own excrement, and even harder when you have other priorities, like inventing more garbage protocols on top of your "Chromium: Reskinned" browser to make it harder for me to select a different default web browser.
"Essence will happily run on low-powered hardware. It can take less than 30MB of drive space, and boot with even less RAM"
which is pretty cool. ESP32s already have 8MB of RAM and access to SD cards. With 16MB could it run this? That would be amazing.
It's in a very interesting little spot between no OS (ala Arduino / Espruino), and an OS like Armbian, which is just huge really -- too big to fit in a microcontroller properly.
That is fine, but once these projects cross some point, the creators have a tendency to forget that their motivation was selfish in the first place, and try to promote their creation as if what they had in mind was to solve problems for other people.
"labor of love To scratch an itch To explore an idea"
That is by no means a bad thing; it just means doing things for themselves, to express, to learn, etc. Typical of creative work.
People do that all the time. However, in OSS there is this thing where people feel they need to present their work as useful to others even though what drives their work clearly has nothing to do with other people's needs; it's just their itch.
> the original motivators behind "Open Source" is scratching your own itch.
that's what I said, and you repeated it, as if it's some new insight or supports some argument.
Is there not an issue in OSS where things that should not be used in a commercial context are used in a commercial context, they break, and then people blame the adopters for expecting too much from OSS? Rarely is the other side of the coin addressed, which is that creators (not all of them) went out of their way to promote adoption of these things.
Here is an analogy: an art-design chair sold as an office work chair. Of course, if you are smart you would not buy one, because over time the ergonomics would kill you. But that doesn't mean no one would. When some gullible people buy these chairs, you say: idiots. However, I bet you also think about the people who made these chairs and paid for TV commercials that never show the ergonomics of the thing, and question their responsibility.
You are saying a different thing now. Your previous comment touched a philosophical matter. Now it's about using, in a commercial product, OSS software people develop in their own time and that they might promote too much (I read this as "irresponsibly"). Both of these do not apply only to this kind of software and even if they did, what is the point of having this discussion (which could be an interesting one) here? If anything, we should encourage people that choose do this sort of work in this day and age.
they can, but don't have to, again there is nothing wrong with doing thing for yourself. Saying people always have others' interest in mind when doing side project does not reflect reality. Many of these projects are built with minimal feedback, so if they solve other people's problem it's by coincidence. I'm talking past the project in discussion of course, I don't know anything about it.
Why would you write a novel after Dostoevsky? Why would you write a song after Chico Buarque? Why would anybody play football after Pelé? And so on and so forth...