Building this for the first time on my laptop. Can I just say, they really thought about the build. Even though they have non-trivial dependencies, the build for those just trying it out is really easy: just four commands.
I readily admit that cooking the mozconfig is some holy hell, but they do have real-world examples in the repo; did you experience problems at that stage, or would it literally not compile?
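For anyone who hasn't seen one, a working mozconfig boils down to surprisingly little. This is a rough sketch from memory for a plain optimized desktop build (the option set varies by target, so treat the real-world examples in the repo as authoritative):

    # Minimal optimized desktop build; options recalled from memory,
    # double-check against the in-tree examples before relying on them.
    ac_add_options --enable-optimize
    ac_add_options --disable-debug
    mk_add_options MOZ_OBJDIR=@TOPSRCDIR@/obj-opt
    mk_add_options MOZ_MAKE_FLAGS="-j8"

The pain usually starts once you layer toolchain pins and per-platform options on top of that.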
And while I haven't tried Ungoogled-Chromium, for unrelated reasons I helped an OSS project build their Brave derivative in a container, so that may interest you: https://github.com/imperviousinc/beacon/blob/main/Dockerfile The only reason it's not already on GitHub Actions for them is that the chromium_src checkout is 38 GB and the GHA cache is capped at 10 GB. But while digging up the link to that Dockerfile, I was reminded that there is a GHA workflow for the macOS build: https://github.com/ungoogled-software/ungoogled-chromium-mac...
It's a NEW browser in every sense of the word: a new rendering engine and JS engine, not built on top of Chromium's (or Firefox's) backends. It's impressive because most people have a few preconceived notions: a) it is impossible to build a NEW browser given the complexity and huge history, b) browsers need to rely on JIT and other tricks. This browser is a new hobby project, written from scratch, that breaks away from the preconceived notions of what a browser should be and gives a glimpse of a future where another open-source project has some influence on web standards instead of the monolithic companies that we have now (this is a pipe dream of course, but we're in a better position now than we were a year ago).
Edit: And I would say to others, please don't downvote the question; I think it was genuine, not meant with malice, and it provides a good learning opportunity for EVERYONE.
The work is impressive, but I think you're putting up some straw men (or inaccuracies):
> It is impossible to build a NEW browser given the complexity and huge history
Building some browser is certainly possible. What's difficult/impossible is to build a browser from scratch which runs 99% of existing websites correctly.
> browsers need to rely on JIT and other tricks
... to be fast on today's JS-rich websites. I believe that as well.
Great quick overview of some of these challenges, and thank you for taking my question at face value; it was asked honestly. Pretty cool what the dev is doing.
The “from-scratch” part is key. (The “can display Google Docs” part is also, well, correct in a strictly technical sense; see the linked screenshot. It is still a significant achievement: even GNOME Web / Epiphany, based on WebKitGTK, has some problems running Google Docs.) The browser is based on a new engine that started development in 2019. It was originally written for a hobby OS with its own toolkit, and was only ported to run on top of Qt as well some months ago[1]. The original author of said OS used to be employed working on WebKit, as far as I understand.
Nobody is trying to impress you. There have been no new browser engines for years, with the idea that it's impossible to develop one anymore being a meme people would espouse regularly.
This is impressive because a group of random interested people have made significant progress toward a functional web browser off the back of passion, not billions of dollars. Andreas has been an incredible force for showing what bringing people together through passion and openness can do.
No worries, I can see how it sounded. The project sounds pretty cool. I really dig the 90's UI of SerenityOS. Do you think the OS can be more than a novelty? I read that it doesn't support 3rd party software. Maybe I don't fully understand what that means, but wouldn't it mean anything that doesn't ship with the OS likely won't work? That I couldn't make my own package that could be installed via the package manager? It seems like that kind of OS just couldn't satisfy people's computing needs.
Not sure what your source for “doesn’t support 3rd party software” is, so it’s unclear what that actually means, but there are numerous examples on Andreas’ YouTube channel about porting software to Serenity, e.g. Diablo [1].
Maybe the text you read refers to the OS itself and all first party software being written without external dependencies?
Well, it is now, but it's also more than that. Originally Ladybird was the cross-platform port of the from-scratch browser on a from-scratch OS. If I recall correctly, the browser on Serenity was just called 'Browser'; Ladybird was created to wrap the LibWeb / LibJS engine from that in a Qt interface that could run on Linux and Mac, and then the Browser program in Serenity was also renamed to Ladybird.
I'm so impressed with the Ladybird / SerenityOS browser engine!
It took Mozilla more than 25 years to write Firefox's HTML engine, and here are a couple of geniuses who are writing a web browser from scratch in just a few years!! Including JavaScript (however, as of yet lacking video and WebGL).
And Google's Chromium is simply a convoluted mess, chock-full of vulnerabilities and memory corruption errors.
Mozilla wrote Gecko, with enough web compat to effectively take on IE6, in about 4 years. We made a couple of significant changes in years 4-6 right before Firefox launched, but Gecko was basically "there" about 4 years after Mozilla transitioned from the old Communicator engine to Gecko/NGLayout/Raptor/Magellen.
Having said that, Gecko wasn't a Netscape start-from-scratch effort. They acquired and built on a very simple and modular cross-platform rendering engine that was used for the preview function in an HTML authoring product called WebSuite by Digital Style out of San Diego. (This is where Netscape got engineers Rick Gessner, Peter Linss, and manager Jim Hammerly.)
So, if you count both the WebSuite and the Netscape efforts, you maybe get something like 5-6 years of effort to get Gecko up to sufficient capability to compete in the market. That's a far cry from "it took Mozilla 25 years" though.
And yes, the Web is much more capable today, which makes building a rendering engine more difficult for sure, but the 25-year comment seemed wrong enough to me that I thought I'd share some of what I know as a Mozilla participant for the last 25 years (and a full-time Netscape/Mozilla employee for the last 22.5 years).
This is not in any way meant to take away from the efforts of any new engine devs or teams. I'm amazed that anyone is even trying given how large the platform has become and have great respect for those trying and succeeding.
I noted that in the later sentences of my comment. Yes, the platform is larger. But in some cases, it's also more sensible now that standardization has had 20 years to work it over. When Gecko was being developed, it was half implementing standards and half reverse engineering Microsoft's undocumented web APIs. Still, I'm sure it's harder today, and that's why I said so near the end of my comment.
Not really. There are certainly more of them, but (and this is extremely important) Mozilla, Apple, Google, and even MS spent the best part of a decade essentially rewriting the specifications that made up the web platform.
The DOM and friends, through the work that eventually got labeled HTML5, were respecified in a way that made it possible to just follow the spec and, as a super simple and "obvious" example, parse URLs in a way that worked. The existing specs of the era were often incomplete, frequently ambiguous, and semi-frequently what they did say did not match how browsers actually worked, which meant that implementing anything required exhaustively testing what everyone else did to try and match it.
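To make that concrete: the current URL Standard (and the URL API that mirrors it) gives one defined answer for inputs the old RFCs left vague, so a from-scratch engine can just follow the algorithm. A rough sketch of the kind of normalization every conforming engine now agrees on:

    // The WHATWG URL Standard pins down case handling, default ports and
    // dot-segment removal that older specs left ambiguous.
    const u = new URL("HTTP://ExAmPlE.com:80/a/../b?q=1#frag");
    console.log(u.href);     // "http://example.com/b?q=1#frag"
    console.log(u.protocol); // "http:"
    console.log(u.pathname); // "/b"

Getting the specs to the point where that answer is fully defined is exactly the rewriting work described above.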
TC39 eventually recognized that the same issues existed in the ECMAScript spec, and that ES4 did nothing to resolve them while adding considerable complexity and more of the same problems. That's how we ended up with ES3.1: it was an attempt to actually specify the language and runtime correctly and completely. ES5 continued/finished that catch-up (ES3.1 did not resolve all the issues) while adding a few important features, but it was still in the "make it possible for someone to implement the spec and be confident that they could run anything any other engine could run" phase.
So in the modern browser there are many more specs, but those specs are much easier to confidently implement as you can get basic working behaviour by essentially translating the spec language into your environment.
Making the resulting engine fast, memory-efficient, etc. is of course another thing entirely (although even that is aided by the lack of ambiguity in the specs, since it means you can actually write tests now).
I'm a fan of Andreas and love seeing the progress here, but let's be fair: it didn't take Mozilla 25 years to write an HTML engine. It took Netscape 3 years to build the world's best (at the time) browser from scratch ('94 to '97).
Chromium's HTML engine derives from the open-source KHTML engine, written for the KDE project.
Not trying to diminish this impressive work from Andreas, but I would believe, as the saying goes, that the last 20% (ongoing, since a browser is never finished) is what takes 80% of the time when making a browser.
> I've studied the Firefox code and there are lots of raw pointers being used everywhere, instead of them using RAII.
Firefox is still in C++ and is still subject to memory corruption vulnerabilities, proving my point again.
> Anyway, wxWidgets is an example of a C++ framework which is almost flawless and comparable in complexity to Gecko.
"almost flawless" is still in the category of 'possible'. Memory corruption is not possible in Rust by default.
Surely you should understand what 'inevitable' means before giving me a counterexample that itself has at least one memory corruption vulnerability, just like the thousands of other projects that are also written in C++, especially browser-based software?
And in reality, it is really going great for C++-based browsers and their engines like Chromium and WebKit; with all their static analyzers, address sanitizers, Valgrind, etc., they have got their memory corruption CVE woes all sorted /s
As Ladybird gets more complex, it will most certainly have memory corruption vulnerabilities, just like the other browsers and it will turn out to be no different.
You're really just dogmatically making assertions. I don't really see much backing these assertions up. Making them more firmly isn't making your case more convincing.
--
Let me make a silly argument. Let's say you write a program in Rust, a sort of elaborate Turing tape that interprets C++ code and evaluates it as a C++ program according to all standards and rules. To the end user it is indistinguishable from a C++ interpreter, but between the C++ code and the system is a layer of Rust.
According to standard Rustacean dogma, this program will simultaneously be guaranteed to be memory safe because it is Rust, and inevitably have memory corruption bugs because it is C++. This is surely a contradiction: it cannot be both memory safe and not memory safe at the same time.
I'm actually pretty impressed with the Chromium source code. For such a huge, old, and complex project, it holds up rather well. It's a lot better than the Linux kernel, for example, another project of similar size/age/complexity.
I think a significant factor in this is that C++ simply has better abstractions for building large codebases than C does. Any big enough C project is gonna turn out a bit funky.
This, from the company with arguably the best security researchers on Earth and probably unholy amounts of internal fuzzing and static analysis tooling.
What do you mean by harder to support? Supporting canvas rendering, in the sense that it works, is much simpler than supporting even minor amounts of the DOM. The problem of course is that you're replacing compiled C++ that does the normal DOM/CSS layout and rendering with someone implementing the layout and rendering in JS (sans actual bit blitting, etc).
If you're making a site that uses canvas to render UI elements, you are taking it upon yourself to implement a lot of standard functionality. Similarly, even basic text entry, let alone editing, is much more complex than people seem willing to accept.
At Google's size/cash, implementing these things might be achievable, but I recall many people/projects over the years that decided to do it in canvas rather than using the available engine (often from a belief that their desire for pixel-perfect positioning was reasonable and more than made up for needing to draw everything and manage text entry themselves). It always looks easy at first glance, typically because the authors don't use accessibility tools, and most of those I interacted with were English-first devs who believe one keypress event == one letter. So you end up with yet another English-only (and I really do mean English specifically here, not Western European) and inaccessible rendering engine for a web page that typically could have achieved 95-100% of what they wanted with the regular engine, with a bunch of the remainder being stuff that doesn't work for people with accessibility issues.
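To illustrate the text-entry point, here's a rough sketch (mine, not from any project mentioned here) of the bare minimum a canvas text field ends up doing. The usual trick is a hidden input, so the browser's IME, clipboard, and accessibility machinery still has something real to talk to, because a keydown-only handler really does assume one keypress == one letter:

    // Hypothetical canvas "text field" that mirrors a hidden <input>.
    const canvas = document.createElement("canvas");
    canvas.width = 400;
    canvas.height = 50;
    document.body.appendChild(canvas);
    const ctx = canvas.getContext("2d");

    // The hidden input is what actually receives text, so IME composition,
    // pasting and autofill keep working; the canvas only draws its value.
    const hidden = document.createElement("input");
    hidden.style.cssText = "position:absolute;left:-9999px";
    document.body.appendChild(hidden);

    function redraw() {
      ctx.clearRect(0, 0, canvas.width, canvas.height);
      ctx.font = "16px sans-serif";
      ctx.fillText(hidden.value, 10, 30);
    }

    canvas.addEventListener("pointerdown", () => hidden.focus());

    // "input" fires for typing, pasting and committed IME text; the
    // composition events fire while an IME is still assembling characters,
    // which a naive keydown handler never sees.
    for (const ev of ["input", "compositionupdate", "compositionend"]) {
      hidden.addEventListener(ev, redraw);
    }

And that still ignores selection, caret drawing, cursor keys, and everything a screen reader needs, which is roughly where these projects tend to give up.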
It's honestly incredible, the progress being made by Andreas... all while also working on an OS.
Not only that but the quality of work is incredible too.
While technically true, it would be more accurate to say that no features are _planned_ in advance, and it's entirely up to individual contributors to decide what they work on.
The link is to a Nitter instance. Even though I prefer to use one as well, it should probably point to the original Twitter instead? For deduplication purposes if nothing else.
Yes. Changed now from https://nitter.1d4.us/awesomekling/status/158971149967250227.... It's fine to include alternate links in comments, but for the submission URL, please follow the site guideline: "Please submit the original source. If a post reports on something found on another site, submit the latter."
To second this, many of the nitter instances and other twitter flattening sites are blocked by corporate security scanning proxies. For those in that situation, note that you can replace the domain name with twitter.com to get the original URL.
I'm not personally aware of filters specifically targeting Nitter instances, but I am aware of setups that whitelist only the top 10 million sites (blocking "undesired" things, of course) and make you manually ask IT if you're visiting a low-traffic site. This stops a lot of phishing attacks, but at the same time, allowing only ~9.5 million sites is still too small a list.
It seems like it will only work on SerenityOS, which apparently explicitly does not support third-party software. Someone correct me if I'm wrong about that, but that seems like an indication that it won't be much more than a toy.
EDIT: I was wrong, this doesn't just run on SerenityOS. Guess it's not limited to being a toy. Thank you everyone for the correction.
More power to them, but modern browsers are among the most advanced and sophisticated pieces of software in the world, probably right next to an OS. WebKit and Chromium have each had billions of dollars and decades of development poured into them. The current duopoly state of the market reflects that fact, unfortunately.
Firefox on Linux doesn't even run a decent number of sites well for me, I assume because they're primarily tested on Chrome. And Chromium often chokes on Google's own voice.google.com web app. Ladybird doesn't have to be perfect to be useful or interesting.
Thanks. You can blame autocorrect for that one. No doubt more of Google's anticompetitive behaviour, using Android keyboards to suppress rival browsers.
Fixing a CSS layout bug found by chessboard.js - https://www.youtube.com/watch?v=bMpLiEgKC_w
Let's make "Cookie Clicker" playable in Ladybird! - https://www.youtube.com/watch?v=W4SxKWwFhA0
He's very good at articulating his thought process, and it's always really interesting to see how he reduces the bug to a minimal reproducible example.