Is it me or is it ironic that they're talking about wealth concentration and then immediately follow it with how they've donated $8 million and plan to donate half their remaining wealth in the next 5 years?
> Device drivers are a huge obstacle for any fledgling OS.
I've wondered if new/hobby OSes would fare better by starting out targeting a popular single board computer like a raspberry pi? A mostly fixed set of hardware to make/get drivers for and test your system on.
I think the path Serenity took is the better one: initially targeting QEMU as the single supported platform. You get the same advantage of targeting a single platform for drivers, but contributors don't need to buy additional hardware, can develop using the platform/tools they're accustomed to, can start instances faster than rebooting hardware, and don't have to deal with the hassles of remote debugging. Targeting a specific SBC as a second platform after a certain level of stability is reached is probably a good idea.
QEMU is a fixed set of hardware. And far easier to target than a Pi.
The founder of SerenityOS created it as therapy and a pure “happiness” project. I am not sure actually using it was a real goal. So, he did the stuff he found interesting. That led him to writing display servers and web engines and crypto libraries and away from “real” drivers. He wrote his own C/C++ standard libraries and userland utilities but only enough driver code to make QEMU happy. It only ever ran in a VM on his Linux desktop. In the end, he found the web browser more interesting than the OS it was created for.
Very different project from Linux where what Linus wanted was an OS for his own computer. Linus was happy to leave the userland to others and still sticks to the kernel even now.
> In the end, he found the web browser more interesting than the OS it was created for.
To be fair, his career was heavily focused on browser development before he ended up in a period of unemployment (I can't recall the exact circumstances), at which point he developed SerenityOS as a means of meditation/to give him purpose.
He still works on the OS, he's just more fulfilled working in a realm he specializes in and has pivoted focus there.
You can follow his monthly SerenityOS YouTube updates leading up to the Ladybird announcement for a more detailed rundown.
That implies AArch64 support which many hobby OSes don't have, usually because the introductory osdev material is written largely for x86.
But yes, raspi is a good platform if you are targeting arm.
As I'm also designing an OS, my biggest piece of advice for anyone seriously considering it is to target two archs at once, in parallel. Then adding a third becomes much easier.
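In practice that mostly means keeping a small arch interface that the portable kernel code calls, with each architecture supplying its own implementation behind it. A rough sketch in C of what I mean (the names and addresses are illustrative, not from any particular kernel):

    /* arch.h -- hypothetical interface the portable kernel code calls;
       each supported architecture supplies its own implementation. */
    #include <stdint.h>
    void arch_early_init(void);   /* exception vectors, MMU, early timer */
    void arch_putc(char c);       /* early debug console output */

    /* arch/x86_64/early.c -- x86_64 backend: legacy COM1 serial port */
    static inline void outb(uint16_t port, uint8_t val) {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }
    void arch_putc(char c) { outb(0x3F8, (uint8_t)c); }

    /* arch/aarch64/early.c -- AArch64 backend: memory-mapped PL011 UART
       (0x09000000 is where QEMU's "virt" machine maps it; real boards differ) */
    #define UART0_DR ((volatile uint32_t *)0x09000000)
    void arch_putc(char c) { *UART0_DR = (uint32_t)c; }

The portable code never touches hardware directly, so a third port is mostly a matter of filling in the same handful of functions.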
Raspberry Pi has a bizarre boot sequence and bringup process, much of which is not open and not implemented in open source code. I think it's probably not a great platform for this sort of thing, despite being decently well documented.
(And even then, its USB controller, for example, has no publicly-available datasheet. If you want to write your own driver for it, you have to read the Linux driver source and adapt it for your needs.)
For anyone that hasn't fallen into this rabbit hole yet it's a good one: raspberry pi started out as a kind of digital billboard appliance, so they chose a GPU with efficient 1080p decoding and strapped a CPU to the die. On power up the (proprietary) GPU boots first and then brings up the CPU.
That's as far as I got before discovering the Armbian project could handle all that for me. Coincidentally, that's also when I discovered QEMU, because 512MB was no longer enough to pip install pycrypto once they switched to Rust and cargo. My pip install that worked fine with earlier versions suddenly started crashing due to running out of memory, so I got to use Armbian's facilities for creating a disk image by building everything on the target architecture via QEMU. Pretty slick. This was for an Orange Pi.
The "color gamut" display, as you call it, is a GPU test pattern, created by start.elf (or start4.elf, or one of the other start*.elf files, depending on what is booting). That 3rd stage bootloader is run by the GPU which configures other hardware (like the ARM cores and RAM split).
You could probably skip some of the difficult parts if you bring in an existing bootloader that can provide a UEFI environment (it's how Linux & the BSDs boot on ARM Macs). But Serenity is all about DIY/NIH.
RISC-V is the new hotness but it has limited usefulness in general purpose osdev at the moment due to slower chips (for now) and the fact not a lot of ready-to-go boards use them. I definitely think that's changing and I plan to target RISC-V; I have just always had an x86 machine, and I have built some electronics that use aarch64, so I went with those to start.
Kernel is still in early stages but progress is steady - it's "quietly public". https://github.com/oro-os
But it's not. Over time they've revised the SoC (processor) and gone from 32-bit to 64-bit capability. The latest, the Pi 5, has totally re-architected the I/O subsystem, putting most I/O functions on their RP1 chip and connecting that to the SoC using PCIe.
And as already mentioned, the unusual boot sequence: The GPU takes control on power up and loads the initial code for the CPU.
All of the OSs I'm aware of that run on the Pi depend on "firmware" from the Raspberry Pi folk. Looking at the files in the folder that holds this stuff, it's pretty clear that every variant of the Pi has a file that somehow characterizes it.
> All of the OSs I'm aware of that run on the Pi depend on "firmware" from the Raspberry Pi folk. Looking at the files in the folder that holds this stuff, it's pretty clear that every variant of the Pi has a file that somehow characterizes it.
That's not very different from depending on the BIOS/UEFI firmware on a PC; the main difference is that older Raspberry Pi didn't have a persistent flash ROM chip, and loaded its firmware from the SD card instead. Newer Raspberry Pi do have a persistent flash ROM chip, and no longer need to load the firmware from the SD card (you can still do it the old way for recovery).
> And as already mentioned, the unusual boot sequence: The GPU takes control on power up and loads the initial code for the CPU.
Even this is not that unusual; AFAIK, in recent AMD x86 processors, a separate built-in ARM core takes control on power up, initializes the memory controller, and loads the initial code for the CPU. The unusual thing on Raspberry Pi is only that this separate "bootstrap core" is also used as the GPU.
> Even this is not that unusual; AFAIK, in recent AMD x86 processors, a separate built-in ARM core takes control on power up,
Good point. I think most modern motherboards (and all server boards) have a "management engine" that prepares the board on power up and even serves some functions during operation. I believe that that's what supports IPMI/iDRAC even when the host is running.
I don't think that changes the situation WRT the variety of H/W for the Pis through their history and even at present.
I've also argued in favor of that; I don't actually like Pis personally, but they're a super common, cheap enough, easy to acquire system and that's huge.
Well, unless they want to target PowerPC and make interested parties buy a Raptor Talos workstation, what else is open enough for you? (Actually, I would support this.) Are there RISC-V systems that are blobless?
I don't know where this idea that the RPi has good hardware documentation comes from. One glaring example is its DWC USB controller. Sure, it has a Linux driver that is open source, but its datasheet is not publicly available!
So if you want to develop your own driver for it, you have to second-guess the missing documentation by reading the driver's comments. This is bad.
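To make that concrete, the driver you end up writing is built on offsets and bit meanings copied out of someone else's source rather than a datasheet. A minimal MMIO sketch in C of what that looks like (the base address, offset and bit below are placeholders for illustration, not the real DWC register map):

    #include <stdint.h>

    /* Placeholder register map -- inferred from a reference driver, not a datasheet. */
    #define USB_BASE        0x3F980000UL   /* illustrative MMIO base address */
    #define REG_INT_STATUS  0x014UL        /* illustrative: global interrupt status */
    #define INT_PORT_CHANGE (1u << 24)     /* illustrative: "port status changed" bit */

    static inline uint32_t mmio_read32(uintptr_t base, uintptr_t off) {
        return *(volatile uint32_t *)(base + off);
    }
    static inline void mmio_write32(uintptr_t base, uintptr_t off, uint32_t v) {
        *(volatile uint32_t *)(base + off) = v;
    }

    /* Acknowledge a port-change interrupt. Whether this bit is write-1-to-clear
       is exactly the kind of detail you can only guess from the driver's comments. */
    void usb_ack_port_change(void) {
        uint32_t sts = mmio_read32(USB_BASE, REG_INT_STATUS);
        if (sts & INT_PORT_CHANGE)
            mmio_write32(USB_BASE, REG_INT_STATUS, INT_PORT_CHANGE);
    }

Every one of those constants is a guess you have to validate against the reference driver's behaviour.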
What do you mean by documented? Sure, we have a general idea of how stuff works, and some implementations can even serve as a reference, but almost nothing is documented in an official sense. Your average Chinese SBC is much, much better documented, in the sense that the SoCs are at least officially documented. The Broadcom SoC isn't.
I think the replies to this post may be missing the point? AIUI, the raspi CPU drivers being closed makes it actually pretty hard to write an open driver for it. So you would need Raspberry Pi or their CPU supplier to write the driver for you, which they wouldn't do for a small OS. It took multiple years to support the raspi 4 in mainline Linux, and AFAIK the raspi 5 still does not have a fully functioning mainline driver. That's why Raspberry Pi OS exists. You would pick a CPU that has open drivers because it would be easier to write your own for a different operating system.
Sure, that's one of the reasons I don't like them. But AFAIK that's not an impediment to running a custom OS, so I think for a lot of projects the tradeoff is good.
It's not so much the proprietary blobs, as the complete lack of documentation and debuggability for the peripherals. The PC platform, and several other SBCs, are either well documented, or at least give you the possibility of obtaining hardware with documentation.
Combine that with general flakiness (e.g., power delivery issues for peripherals on older Pis), and you end up with users blaming the software for hardware issues.
Probably not the Raspberry Pi, as it is one of the less conventional SBCs in terms of booting, and while its hardware is more documented than ever, it's still a less-documented custom Broadcom chip.
If it did get implemented, maybe it would force companies to look towards software efficiency instead of hardware upgrades every x years, in the same way covid forced companies to test work from home when before it was "impossible"?
Sadly, they have parted ways at this point. Not only has Ladybird broken off into an independent project but it does not consider SerenityOS a target platform anymore.
Ladybird is slowly shedding a lot of the “home grown” Serenity layers and replacing them with more mainstream alternatives.
As I am primarily a Linux user, I am excited to see Ladybird become a real alternative on Linux. However, as a fan of SerenityOS as well, I am sad to see all the energy and innovation that was going into Ladybird get stripped out of SerenityOS.
Ladybird has a very large political aim: to become the only browser that isn't funded by Google or based on Google's browser engine. The reason it left behind SerenityOS is because it has moved from a hobbyist aim to a very serious political aim.
Ladybird aims to build a truly new browser engine, but it's using big Google libraries like ANGLE and Skia to do it. I don't know that it's really fair to frame it as escaping Google completely like that.
I'm not sure why you are being downvoted -- I can see ANGLE as just a platform compatibility layer (in the same category as Qt, which Ladybird also uses), but Skia (vector graphics) is definitely an important part of a rendering engine, which, coupled with the insane complexity of Skia's implementation, does appear to jeopardize the whole 'web rendering engine from scratch' aim.
Well, the "engine" is pretty much from scratch but the project no longer believes in "being responsible for everything ourselves".
The goal with Ladybird is to be "independent" in the sense that nobody influences their decisions. However, they have no desire to be independent from a technology stand-point (other than being Open Source). In fact, they are talking about moving to Swift as their core language.
> The goal with Ladybird is to be "independent" in the sense that nobody influences their decisions. However, they have no desire to be independent from a technology stand-point (other than being Open Source).
Wouldn't a hard fork of Blink instead of LibWeb still achieve this goal?
You should say the only major browser that fits those categories, because examples of the latter exist (Orion uses WebKit and Zen uses Gecko), and I imagine the former is even more common.
Sure, but would they? Currently they get it totally for free. If they had to finance the development themselves, it would get real hard to justify real quick. $20bn is a lot of money even for Apple.
It's not about whether or not Apple have the resources to make their own browser engine, it's about whether it makes sense from a business point of view to make their own browser engine. Currently it does, because Google pay them huge amounts of money to do so. But what business case would there be to pay that $20bn themselves if Google did not fund them? Would it be worth that just to avoid Chromium?
Tbf - they don't pay for WebKit, they pay to be the default search engine. If Apple wanted, they could switch to Chromium and still have the same captive audience and bargaining power (but a lot less control of the direction web standards go)
That’s not necessarily true, even Microsoft has its own tweaks of Chromium:
> We’ve seen Edge adding some privacy enhancements to Chromium pioneered by Safari. Edge shipped those, but Chrome did not. And as more browsers start using Chromium and large companies will work on improving Chromium, more of these disagreements will happen. It will be interesting to see what happens.
> Just because a browser is based on Chromium, that does not mean it is identical to Chrome and that Google is in control. Even if the unthinkable happens and Apple is forced to adopt Chromium, that will only ensure that Google is not the only one having a say about Chromium and the future of the web.
Fwiw, I agree it's problematic to lock down phones the way Apple does. I won't use them because I'm not buying a device where I don't get to decide what runs on it.
And for sure they would put their twist on Chromium, like Edge or Brave or Vivaldi.
I still think they have a lot more control the way it is now, for better or worse.
This is insipid. Why would Apple adopt a fork of WebKit when they’ve been using WebKit just fine for so long? Why would Apple of all companies defer to something in Google’s realm besides search? Do you have a single technical justification for Apple to overturn decades of WebKit use that’s baked into its frameworks and its control over iOS to use Blink?
IE was too long in the tooth, Microsoft was behind by several trends at that point, mobile being one of them. Don’t think the situation with Safari and WebKit is comparable.
As a small correction that somewhat matters to this hypothetical, Microsoft had already moved away from Internet Explorer/Trident to Microsoft Edge/EdgeHTML. It was quite competitive and modern already.
So, they did not "move away from IE to catch up". They "dropped the Edge engine in favour of Blink (Chromium)". It feels very much like Microsoft just did not want to compete on the engine (run-to-stand-still) but rather just on the feature set. Who can blame them?
If you think about why Microsoft really switched, I think it is a fair question why Apple would not just do the same thing. I mean, as long as WebKit is the only engine allowed on iOS, it makes sense for them to control it. But as regulators force them to open that up, and perhaps put an end to the Google gravy-train, I think it is a fair question why Apple would spend that much money on a web engine when they do not have to.
You cannot fall behind the competition using Chromium as a base, because they are all using it too! It is the ultimate in safe corporate options.
While the Apple-Google rivalry seems to have waned compared to a decade ago, I just don't see Apple completely capitulating on their platform/browser engine the way Microsoft did.
Not to mention even if Apple switched to Chromium, they’d just end up taking over that engine, even forking it later down the road:
> We can only imagine what would have happened if Chrome kept using WebKit. We probably ended up with a WebKit-monoculture. Trident is dead, and so is EdgeHTML. Presto is long gone. Only Gecko is left, and frankly speaking, I will be surprised to see it regain its former glory.
But Chrome did fork, and today, we can also see similar things happen in Chromium. I don’t expect somebody to fork Chromium, but it could happen.
We’ve seen Edge adding some privacy enhancements to Chromium pioneered by Safari. Edge shipped those, but Chrome did not. And as more browsers start using Chromium and large companies will work on improving Chromium, more of these disagreements will happen. It will be interesting to see what happens.
Just because a browser is based on Chromium, that does not mean it is identical to Chrome and that Google is in control. Even if the unthinkable happens and Apple is forced to adopt Chromium, that will only ensure that Google is not the only one having a say about Chromium and the future of the web.
And that is what is crucial here. The choice between rendering engines isn’t about code. It isn’t about the rendering engine itself and the features it supports or its bugs. Everything is about who controls the web.
There are plenty of scenarios which can be discussed in detail which have no possibility of coming to pass. Zombie apocalypse fiction, for instance.
I never had any beef against Ladybird. To bring this conversation to full circle, I merely clarified there are at least a few other promising new indie browsers that don’t use Chromium. In the event that Apple does abandon WebKit- which wouldn’t mean the termination of the project anyway!- I would simply use one of those alternative browsers.
Edit: while we are on the subject of wild hypotheticals, there’s also the DOJ suggesting Google split off Chrome into its own company for antitrust.
Safari is now the browser that is lagging furthest behind. And it has not gotten better recently either.
Apple even got into "AI", so I would not put it beyond them to kill a browser team.
As per my reply to the sibling comment, I don’t think Apple is anywhere near to the capitulation that Microsoft was when it came to abandoning their browser engine.
I just wish it retains the "hobby project with real programming practices first" vibe and doesn't get carried away with the anxiety to compete with the big browsers.
Yes, I too want a third browser alternative. But if they sacrifice code quality for getting there fast, it will end up with the same fate as Firefox.
It's not quality they're sacrificing. SerenityOS is built on the idea of rejecting anything "not invented here"; basically, it's from-scratch on purpose. Ladybird, by contrast, actually has the goal of being a real, usable, viable independent browser. So they're removing a lot of the home-grown Serenity stuff and replacing it with open source libs. For instance, they just removed their home-grown SSL implementation and replaced it with OpenSSL. Likewise, for their graphics layer they adopted a mature backend, which now supports WebGL as a result. Ladybird's network stack is based on Curl these days, I believe. It's about using solid public open source libraries as the foundation instead of having to be experts in every niche part.
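For a sense of what delegating to a library like that buys you, a minimal fetch through libcurl's easy API looks roughly like this (a generic libcurl example, not Ladybird's actual networking code):

    #include <stdio.h>
    #include <curl/curl.h>

    /* Minimal fetch: TLS, redirects, proxies, HTTP versions, etc. are all
       handled by the library instead of home-grown code. */
    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL *h = curl_easy_init();
        if (!h) return 1;

        curl_easy_setopt(h, CURLOPT_URL, "https://example.com/");
        curl_easy_setopt(h, CURLOPT_FOLLOWLOCATION, 1L);   /* follow redirects */

        CURLcode rc = curl_easy_perform(h);                /* body goes to stdout by default */
        if (rc != CURLE_OK)
            fprintf(stderr, "fetch failed: %s\n", curl_easy_strerror(rc));

        curl_easy_cleanup(h);
        curl_global_cleanup();
        return rc == CURLE_OK ? 0 : 1;
    }

The browser still owns the interesting parts (HTML, CSS, JS), while decades of transport-layer edge cases live in curl and OpenSSL.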
They are vastly improving code quality by replacing first party libraries with third party ones. For example all the encryption libraries, media codecs etc. create a needlessly large attack surface (and general correctness surface) if you roll your own. That's why some of the first things they've started using are FFMPEG and OpenSSL.
I wonder if this is like the social media equivalent of The Sims? Or a soap opera. Log in to see if Jimmy works up the courage to ask out Rhonda. Will Ashley find out about the secret William told you? It's like a Westworld where you're "safe" to interact socially in any way you feel, because they don't feel anything. As long as you click a few ads a day to keep the lights on.
He's had a lot of starts and stops on making a next game. I think the pressure from fans who think he has a Midas touch has prevented him from getting very far in anything.