Fuchsia Workstation (fuchsia.dev)
367 points by timsneath on March 28, 2022 | 445 comments



Since the build page doesn’t provide much context: Fuchsia has three different configurations, bringup, core and workstation.

The workstation configuration is described as:

> workstation is a basis for a general purpose development environment, good for working on UI, media and many other high-level features. This is also the best environment for enthusiasts to play with and explore.

From https://fuchsia.dev/fuchsia-src/development/build/fx#key-pro...


Having used several Google OSS projects, I would NEVER consider using an OS from Google.

The documentation structure is often atrocious, there are always dozens of recognized but unresolved bugs, those bugs often remain for years and are resolved slowly, the documentation often is not updated to reflect the state of open bugs so code samples and tutorials are often broken, non-Google pull requests are often left to die on the vine[0], there are no road maps, the Google employees use an internal build system and the external packages are often broken, and then there are these three words: "Not. A. Priority." If your bug is not an internal Google priority you are basically out of luck. Hope you like maintaining your own patches, for years.

I'm not saying that OSS projects from other large companies don't have similar problems; I'm saying that every Google project I've depended on has had all of these problems.

The idea of my OS being run like this? Absurd.

This is just a guess, but I'd wager that most Google OSS developers just wanted a fancy corporate job and got stuck interfacing with the public on an OSS project. I don't think that most of them aspired to be OSS developers, and it shouldn't be a requirement.

[0]: Just today a two year old bug that I've been watching was closed by merging a pull request that was first opened in early-2020, closed as stale in mid-2021, and re-submitted/merged this week.


> The idea of my OS being run like this? Absurd.

Isn't Android run like this? Is the community participation in Fuchsia different from that of Android, i.e. build inside Google and publish the code to AOSP?

Regardless of how 'open source' Android is, I'm grateful for its 'openness'. I cannot imagine mobile compute held solely by walled gardens and carriers.

Anyway, for those who appreciate the openness of Android, Brian Swetland is the person to thank[1].

> “Peter mentioned something about exclusivity for our first device, which Brian overheard. By the time we got back to our hotel room, Swetland had threatened to resign because ‘I didn’t join Android to become another Danger.’ I was concerned because Brian was so critical to our success, but when I saw him the next day everything was fine.”

> He was strongly in favor of Android’s vision for an open and independent platform. He threatened to resign several times during his time on the Android team over decisions that would have resulted in a closed platform.

[1] https://arstechnica.com/information-technology/2021/08/excer...


Not so fast: in areas where the bug can be phrased as basic compatibility with a recognized standard, it is often easier to get Google (or Apple, or ...) to flinch than in areas with a mixed standard/implementation they think they own.


> Not so fast: in areas where the bug can be phrased as basic compatibility with a recognized standard, it is often easier to get Google (or Apple, or ...) to flinch

Unfortunately, like the parent, I've been following a couple of bugs for years on which Google hasn't 'flinched' yet; 'Chromium on Linux not respecting prefers-color-scheme when dark GTK theme is set'[1] is one such bug.

[1] https://bugs.chromium.org/p/chromium/issues/detail?id=998903...


It looks to me like they tried to flinch, or at least didn't close it the way they close almost everything UX-related I've seen on the Chromium bug site. Pretty impressive, given that while GNOME is external, it's not an actual standard of the sort the POSIX or RFC networking gangs kick each other in the crotch over. If they aren't publishing formal standards to an outside body that starts picking up independent interested parties, I think they will have more room to slide down from their current quality, whether or not the current quality is acceptable to critics today.


I think the way to deal with these kinds of community-hostile (or community-ambivalent) corporate open-source projects is to just hard fork them once they reach a stable version, and let the community fork break compatibility. You lose the continued efforts of the internal development team, but you get a stable foundation to fix bugs and add polish.


Are there screenshots or videos of Workstation?


Came to comments for this. Screenshots or it didn't happen.


People are saying Fuchsia is only usable on IoT devices, not much progress, etc.

However, not many new operating systems have a Chrome browser [0] on them, and Fuchsia does. Not many people realise how huge this is, let alone on an OS built from the ground up.

So I don't think Fuchsia will be going away; in fact, I would say it is coming pretty fast.

Whether you like it or not, this tells me that they are targeting desktops, laptops and Chromebooks next.

[0] https://youtube.com/watch?v=Rg9YgWXLfEo


In part this is because it's (very) hard to get the Chrome team to take a new OS port. Google's internal OS probably has some advantage here.


Isn't the point of Chrome's Ozone stuff to add a proper platform interface, so that making ports becomes straightforward?


It's being used in a consumer product (the Nest Hub).


> However, not many new operating systems have a Chrome browser [0] on them, and Fuchsia does. Not many people realise how huge this is, let alone on an OS built from the ground up.

Enlighten me, why does this matter?


Modern browsers use an extremely wide range of system APIs: process/IPC/sandboxing, memory, networking/connectivity (e.g. bluetooth), filesystem, GPU, audio, windowing system, device IO (keyboards, webcam), and more I'm sure.

I can't think of any other kind of software that comes close. If you can run a modern browser, it means that your OS is already quite mature.


It shows how mature Fuchsia is. Chrome is an enormous piece of software that requires a lot of things to run.


Google has been investing in Fuchsia for years now. What's their end goal with it?


When I was there I was baffled by its sudden rise and the support it seemed to have from senior mgmt.

My impression when it was announced was that it started as a really neat personal project by some long-time OS development nerds (I mean that in a good way) that then escaped the gates and became a hammer in search of nails.

It ended up eating the project I was on, and was many, many years late in the process of doing so, breaking schedule promise after schedule promise. I kept waiting for management to put a lid on it, but they seemed to have infinite headcount and budget, and infinite patience from mgmt, to support their mission to rewrite the world, while we struggled to get time and resources to maintain the codebase we already had, which shipped as a profitable product and was sitting in millions of people's homes with good reviews.

Google has OS politics dysfunction... especially in hardware devices where there are almost a half dozen Linux distributions fighting it out for marketshare. Fuchsia just added to that craziness.


Can you steel man the case for Fuchsia? What's the optimistic result that justifies this (apparently rather expensive) bet?


The drivers situation for embedded Linux SoCs is awful. Fuchsia aims to fix that. Security model is better. Licensing I guess is easier to deal with. The OS structure is explicitly modeled for the kind of consumer hardware projects in question instead of, y'know, a PDP-11.

None of this changes that the rollout for these things could be done incrementally in parallel with the existing Linux based platforms, with an eye to produce shared components whenever possible instead of rewriting the world. The Fuchsia folks were notorious for their obsession with purity of essence, and because they had the headcount and $$, they just did what they wanted.


Fuchsia's driver handling is superior to what you get with linux. Because pretty much everything is pushed into userspace it makes it dead simple to keep a driver going forever.

Were I to guess, the prime motivation for Fuchsia would be to have a phone or IoT device which could regularly receive kernel updates without needing hardware vendor interaction. That's the biggest sore spot for linux. (not that I think Fuchsia's kernel would require a bunch of updates due to how simple it is).
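To make that concrete, here's a rough sketch of the shape of the idea in Rust (purely illustrative; plain std channels stand in for the real IPC, and none of these names are Fuchsia APIs). The driver is just another userspace process behind a message channel, so the kernel never links against it and an outdated or crashing driver can't take the system down with it:

    // Hypothetical sketch only; these names are not Fuchsia APIs.
    use std::sync::mpsc;
    use std::thread;

    enum Request { Read { len: usize } }
    enum Reply { Data(Vec<u8>) }

    // The "driver" is an ordinary userspace thread behind a message channel;
    // the kernel never links against it, it only shuttles messages around.
    fn block_driver(requests: mpsc::Receiver<Request>, replies: mpsc::Sender<Reply>) {
        for Request::Read { len } in requests {
            // Talk to the hardware here; a panic kills only this driver,
            // not the kernel or the rest of the system.
            replies.send(Reply::Data(vec![0u8; len])).ok();
        }
    }

    fn main() {
        let (req_tx, req_rx) = mpsc::channel();
        let (rep_tx, rep_rx) = mpsc::channel();
        thread::spawn(move || block_driver(req_rx, rep_tx));
        req_tx.send(Request::Read { len: 512 }).unwrap();
        let Reply::Data(buf) = rep_rx.recv().unwrap();
        println!("driver answered with {} bytes over IPC", buf.len());
    }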

This is an outsider's perspective. Definitely a big expensive project that reminds me of other big expensive google projects which seem DoA.


> Fuchsia's driver handling is superior to what you get with linux. Because pretty much everything is pushed into userspace it makes it dead simple to keep a driver going forever.

How does the lack of hardware vendor cooperation get improved by moving the problem into userspace?

You still need/want the hardware vendors to create drivers and update them, which they frequently don't.

I guess it's better because you're just going to be content with binary blobs?


Yes, that's exactly the point: unlike Linux, and like Windows, Fuchsia defines a binary interface for drivers. As long as new releases of Fuchsia keep supporting the old interface, old driver blobs will keep working.
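A toy sketch of what that promise looks like (illustrative only, not Zircon's real driver interface): new releases add a new interface version alongside the old one instead of changing it, so a driver built years ago against V1 keeps loading.

    // Hypothetical versioned driver interface, for illustration only.
    trait BlockDeviceV1 {
        fn read(&self, lba: u64, buf: &mut [u8]) -> Result<usize, i32>;
    }

    // Later releases add features in a *new* interface instead of changing
    // V1, so a driver built years ago against V1 still loads and works.
    #[allow(dead_code)] // V2 goes unused in this tiny example
    trait BlockDeviceV2: BlockDeviceV1 {
        fn trim(&self, lba: u64, blocks: u64) -> Result<(), i32>;
    }

    struct OldVendorDriver; // imagine this arrived as a prebuilt blob

    impl BlockDeviceV1 for OldVendorDriver {
        fn read(&self, _lba: u64, buf: &mut [u8]) -> Result<usize, i32> {
            buf.fill(0); // pretend the device is all zeros
            Ok(buf.len())
        }
    }

    // OS code keeps speaking V1 to anything that only implements V1.
    fn mount(dev: &dyn BlockDeviceV1) -> Result<(), i32> {
        let mut superblock = [0u8; 512];
        dev.read(0, &mut superblock).map(|_| ())
    }

    fn main() {
        mount(&OldVendorDriver).expect("mount failed");
        println!("a V1-only driver still works under an OS that also offers V2");
    }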


It's common to conflate the idea of source availability with problems encountered by lack of binary stable interfaces. One of the problems with the status quo on Linux isn't that drivers aren't released as source, but that they aren't upstreamed. This causes the drivers to no longer be valid very quickly. If you solve the validity problem, folks can continue to release drivers, alongside their source and they will continue to be valid even as the rest of the operating system evolves. Just because Windows drivers tend to not have source available doesn't mean that it's a given the same will be true for fuchsia drivers. It is ultimately product makers who use fuchsia as a platform who drive the incentives for what driver authors do with their drivers.


I don't think it will be DOA. They shipped it on the Nest Hub, and I'm sure they'll find other places to put it now.

And yeah, I think you're pretty spot on with the driver/vendor comment.


>Security model is better.

From the outside looking in, I thought the security model could really help Google lower the splash radius of zero-days? The feature set certainly sounds appealing just reading the marketing blurb. [1]

With any luck, in the next iteration Google will create a Fuchsia ISO or VMDK so people who want to give it a spin without building it can quickly get a taste of the environment. The fact that you can at least run it in an emulator is definitely a step up from requiring dedicated hardware, which was the previous process. [2]

[1] https://fuchsia.dev/fuchsia-src/concepts/principles/secure [2] https://fuchsia.dev/fuchsia-src/development/hardware/paving


I believe Google can make a system that is secure from outside adversaries, but I'm less trusting that they would make an OS that isn't riddled with hooks for Google surveillance.


Certainly. The sad thing is how many won't make the switch. I'm not judging, I'd probably do the same due to pressures (ease of use, bandwagon, specific pain point fixes vs Linux).


> The drivers situation for embedded Linux SoCs is awful. Fuchsia aims to fix that.

How can they solve that with software? It's an incentive/economic problem. Hardware vendors don't want to play nice; there's no incentive for them. It's better for them to be jerks and never release their stuff, or do it years after their hardware stops being relevant.


Replace and unify Android, ChromeOS, and their various IoT devices. Eventually it becomes the base image for FaaS and App Engine. Complete domination.

Now if they get Microsoft/Apple-level OS penetration, you can be damn sure it's gonna be serving only Google Search. Just to ensure/build more moat around Google Search, which can justify infinite cost.


>Now if they get Microsoft/Apple-level OS penetration, you can be damn sure it's gonna be serving only Google Search.

Unless they want to do crazy things like sell to the EU market, in which case they're back to having to offer multiple search engines.


Well I mean, userspace would still be up in the air, and there's no reason to have Google Search in a kernel, right?


Yea, but I'm excited to see what a modern kernel can do that's master-planned with our current knowledge. I don't know too much about the Linux kernel, but I imagine there's a lot of cruft and inefficiency because it grew over a long period of time in an ad hoc manner.


You aren't a TRUE computing technology company without your OWN OS, bonus points for a boondoggle one. That's like a hypercar, not just a supercar.

IBM has, what, a half dozen between mainframes, AIX, and older ones?

Apple has their own OS of course.

Microsoft of course.

Facebook? Netflix? Amazon? Such second-rate "tech" companies, tsk tsk.


I'm on a project where money doesn't seem to matter and it's obnoxious.


It produces toxic effects. Like an algae bloom.


Tbh, I would expect Google to have enough money to do both [develop new OS] and [maintain/improve existing products].


Plenty of money, but not enough discipline and focus to make that happen.

After years in this industry and seeing so many things like this just fail, my perspective is that the "right" way to do it (rewrites/replatforms) is to build shared components and have teams that deploy to both, and so live in both worlds.

This way the old gets the new, and the new has to live and breathe the reasons for the compromises that the old lived through the first time. And you don't end up with a "legacy" team and a "future" team.

Example: I tried (a bit half-heartedly and I was low-status so nobody cared) to pitch that we write a new screenreader for Home Hub rather than brutally kludging in the one from ChromeOS. A new screenreader that could be shared with Fuchsia, as they were writing one from scratch. If building from scratch why not build one as a component that can be shared? That approach was seen as a total non-starter by the people I talked to. Fuchsia got to write their shiny new thing basically in isolation and barely interacted with us poor folk who had to keep maintaining the actual-existing screenreader deployed on several million devices in people's homes. Mainly fell to my friend/coworker who was a hero.

BTW we both quit the same day.


If sharing this won't get you in trouble, I'm curious about why you believed that the ChromeVox screen reader wasn't a good fit for Home Hub. Was the Home Hub UI not HTML-based? Or did you see some design flaw in ChromeVox (which, if I'm not mistaken, is open-source so you should have no problem going into detail on that)? Feel free to email me privately if you prefer. Thanks.


From the user perspective, ChromeVox on the Home Hub feels... bolted on. It uses the Assistant TTS, probably over the network, so lag is extreme (a thing you don't want in your primary interface to a device), and random portions of the UI don't read reliably.


As a totally blind Home Hub user who finds ChromeVox completely unsuitable for the platform, I'm saddened (though not surprised) that a Fuchsia screen reader could never see the light of day.


I believe Fuchsia does have its own native screen reader. I have no idea if it's used on the home hub, but I would hazard a guess that it is.

https://cs.opensource.google/fuchsia/fuchsia/+/main:src/ui/a...


They already do this - ChromeOS has been a thing for years.


Nest Hub?


Well, they say it's not intended to be an Android replacement, but I'm persuaded it's intended to be an Android replacement.

The main advantages over Linux are a clean room design, better security-by-design (having a microkernel means the attack surface is way smaller, capabilities means sandboxing-as-a-security-mechanism becomes possible), and more control over the project.

Honestly, I think the other commenters are being way too cynical about this. If this project gains traction, even if Google goes evil with it, forks and community distributions could still happen like they do for Linux, and be immensely beneficial. The sandboxing alone is a huge benefit.
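To make "capabilities" concrete, here's a minimal sketch of the idea in Rust (purely illustrative; nothing to do with Zircon's actual handle types, and in a real capability OS the kernel enforces this across process boundaries rather than the type system within one process):

    // Illustrative only; not Zircon's actual handle/capability types.
    use std::fs::File;
    use std::io::Write;

    // An unforgeable token: the only way to get one is to be handed one.
    struct WriteCap { file: File }

    // The component never sees a path or a global namespace, so even if it
    // is compromised it can only scribble on the one file it was granted.
    fn sandboxed_component(mut log: WriteCap) {
        let _ = writeln!(log.file, "hello from inside the sandbox");
    }

    fn main() -> std::io::Result<()> {
        // The parent decides exactly which capabilities the child receives.
        let log = WriteCap { file: File::create("component.log")? };
        sandboxed_component(log);
        Ok(())
    }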


Android already has a sandboxing-first design. Every app runs in its own sandbox done via different Linux users. It's a bit crude to use a different user per application, but it's effective & robust.
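As a rough illustration of what that looks like from inside a process (this assumes the libc crate, and the example uid in the comment is made up), the isolation falls out of ordinary Unix uid/gid checks rather than a separate sandbox layer:

    // Illustration only (Unix-only): Android assigns each installed app its
    // own Linux user (e.g. u0_a57 -> uid 10057) and leans on standard
    // uid/gid permission checks for isolation.
    fn main() {
        let uid = unsafe { libc::getuid() }; // the uid this "app" runs as
        let gid = unsafe { libc::getgid() };
        println!("running as uid={uid} gid={gid}");
        println!("another app's private files are owned by a different uid, so opening them is just EACCES");
    }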

Android is also pretty unapologetically POSIX/Linux and doesn't shy away from exposing that to applications (eg https://developer.android.com/reference/android/system/Os ). So I don't think Fuchsia would replace the Linux kernel in Android. It'd have to be far more rewarding a migration to justify the massive ecosystem breakages that would result (for both apps & OEMS)

Windows XP proved you can do it, but that at least came with a massive (real world) improvement to things like security & stability that were appreciable upgrades to end users.


> So I don't think Fuchsia would replace the Linux kernel in Android.

How hard would it be to add a POSIX subsystem to Fuchsia?

My outsider opinion is that the Oracle lawsuit increased motivation for alternatives to Java (the language), and Google decided to put more wood behind the Dart/Fuchsia arrow.



With an explicit call-out for running Android apps, no less!


> It's a bit crude to use a different user per application, but it's effective & robust.

This is how I run services on my home server. Plex runs in a rootless container under user plex for example.


IMHO Fuchsia will be a massive (real world) improvement over Linux in security.

Also, 5 or 7 years from now, on which OS will Chrome or Chromium run better? Fuchsia or Linux?

For the past 12 months, I've been running Chrome "on Wayland" (without XWayland in between) and although it is definitely usable, there are many small bugs, some of which have existed the entire 12 months.

(And will Firefox even be maintained 5 to 7 years from now?)


> Also, 5 or 7 years from now, on which OS will Chrome or Chromium run better? Fuchsia or Linux?

Linux's GUI layer is such a huge weak spot, and it doesn't look like Wayland's gonna fix that (it seems like it's not even set up to address the most serious problems, really).

If they put Fuchsia on Android devices & Chromebooks, that'll be about the end of the story for consumer-facing Linux. Then if they can make it work well as a container-hosting server OS and decide to push it for that purpose... well, the year of the Linux anything might be behind us, then.


Neither ChromeOS nor Android use any desktop Linux software such as X, Wayland, Gnome, KDE, or otherwise. As far as I know Chrome still runs fine on Desktop Linux despite this.


> IMHO Fuchsia will be a massive (real world) improvement over Linux in security.

Exploits in the Linux kernel are very few & far between. How would Fuchsia represent a massive (real world) improvement over Linux in an area where exploits basically don't happen?

By contrast for the Windows 9x -> NT kernel transition, the 9x kernel (in Windows ME at the time) had rampant worm issues and was notoriously unstable in very significant & practical ways, like plugging in USB devices would trigger BSODs with some regularity.

These days the majority of kernels (Windows, Mac, and Linux) have vanishingly few exploits and are for the most part extremely stable. There's not much to improve on at this level.

> For the past 12 months, I've been running Chrome "on Wayland" (without XWayland in between) and although it is definitely usable, there are many small bugs some of which has existed the entire 12 months.

Note that neither ChromeOS nor Android use Wayland or X11. That compositor fight that desktop Linux can't move on from isn't something that plagues anybody else, so there's nothing for Fuchsia to "fix" there.


> Exploits in the Linux kernel are very few & far between.

That's an interesting take on multiple code execution bugs per year. And not via drivers, but userland-exploitable code in general subsystems.

Unless you're referring to remote code execution, which in the era of ubiquitous web applications (often running involuntarily through advertisements, etc) seems like a distinction without a difference.


> And not via drivers, but userland-exploitable code in general subsystems.

We're exclusively talking about the kernel+drivers here. User land exploits are irrelevant (and obviously not something fuchsia will be immune to).

> Unless you're referring to remote code execution

I'm referring to exploits that actually are found in the wild to have caused damage that a change in kernels would have done something to prevent.


Drivers are the commercial case for Fuchsia. But in general, microkernels make it much easier to 1) implement privilege isolation for subsystems and 2) implement subsystems in a more secure manner, both of which absolutely improve security posture. A subsystem is just another type of driver. Though, it depends on how well Zircon makes use of this, i.e. avoids implementing all the most critical subsystems in the same process or abusing too much unprotected memory sharing among them.


> 1) implement privilege isolation for subsystems and 2) implement subsystems in a more secure manner, both of which absolutely improve security posture.

Sure, but Android already has that via a user per application for app sandboxing & a very extensive SELinux policy set[1], which makes the real-world benefit of that seem negligible. There's a huge gap between desktop Linux & Fuchsia/Zircon here, but there doesn't seem to be a particularly big gap between Fuchsia/Zircon & Android Linux.

1: See all the .te files in the public & private dirs of https://cs.android.com/android/platform/superproject/+/maste...


AFAIK, exploits for linux don't typically happen in core linux code, but rather in the drivers.

That's what Fuchsia bulletproofs. Drivers are isolated from the kernel, such that an exploitable driver doesn't also give the exploit root access.


Sure, but even still, exploits in kernel modules are extremely rare. The vast majority of exploits are in getting userspace to do something it has permission to do, but in a way that it didn't want to do it. Sandboxing & permission systems help here tremendously, and Android already has a pretty robust & extensive system (not just the normal app permissions, but also a massive set of SELinux policies controlling what a given system service can do).

Desktop Linux is pretty far behind the curve at this point, but Android/iOS aren't (and increasingly MacOS/Windows are fixing things up)

Fuchsia seems like it'd be an incremental improvement here at best, and "real world" improvements even less clear than that.


Firefox stems from Mozilla back in 1998, only 5 years after Mosaic and 3 years after the original IE. It seems exceptionally likely that Firefox will continue in some form for the foreseeable future.


> And will Firefox even be maintained 5 to 7 years from now?

Hopefully! It would be a bummer to have to switch to some hipster browser like Suckless Surf (assuming browsers made by advertising companies are not candidates for obvious reasons).


What google is trying to do here is equivalent to getting everyone to switch from IPv4 to IPv6. Yes, it's superior. Yes, there'd be massive benefits. Yes, it isn't likely that google will go super evil.

However, Google isn't the only player in this. They need to get all their vendors to rewrite drivers for their new kernel. They'd also need everyone using Android, for example, to get on board writing drivers.

Meanwhile, one of the largest players, Samsung, is looking at making their own OS.

For an already fractured ecosystem, it just doesn't seem like it's something that can be successfully pulled off. Maybe if google/android was more like apple it'd be doable. But not where there are so many players that need to be brought in the mix.


I agree - I think only a company with pockets as deep as Google could pull off creating a brand new operating system in today's world. I'm curious to see what comes of it.


I agree with the premise, although I disagree that only Google could do this. Honestly there are a low number, but greater than one, entities that could do this. Microsoft is an obvious one, for instance.

That said, to your bigger point -- I think the way to pull off a brand new OS in today's world is to write something that scratches an itch that a small-ish group of people have. Do it well enough, and it will grow into something bigger. In fact this is exactly how Linux got going. It will work again, guaranteed.


I would actually love to see a full, from-scratch rewrite of Windows. I'm not sure if it's ever been done before, but it definitely feels like it hasn't.


Well, Windows NT was that.

It was even on the verge of being a microkernel, and it could run a super lightweight virtualization (for lack of a better word) system: OS personas.

It would be interesting to get Windows NT NextGen, true.


You might be interested in ReactOS[1]. Their objective is to make a 100% FOSS Windows Compatible OS, including drivers.

Currently, they're at around ~WinXP level support

[1]: https://reactos.org/


And no GPL... Google is very hostile to the GPL, so that alone would be a reason to replace Linux.

Personally I think they have grander visions than just Android.


I believe the long-term goal is for it to replace Linux as the kernel in Android.

The technical reason, I'd say is that the access control model in Android is a kludge on top of the Linux kernel, and it is more difficult to sandbox apps. Fuchsia was made to support Android's model, and in a capability-oriented OS you get sandboxing by default.

A political reason is to not be dependent on Linux, but I'm sure that other people have opinions about there being more.


My first theory is they started it back when the Oracle lawsuit was happening over API copyrightability. They were probably worried that some Unix copyright troll would try to sue them, so having a backup OS was a good research project. Now it's just a make-work project to keep smart engineers from going to the competition.

My second theory is that it exists as leverage, to scare the Linux kernel maintainers into thinking that they could lose their biggest userbase (Android) and make them more compliant.


> scare the Linux kernel maintainers into thinking that they could lose their biggest userbase (Android) and make them more compliant.

Not so sure about that. It's not like any of them collect royalties. If anything it just adds to their maintenance workload.


It's my belief that open-source maintainers usually become attached to the fame and importance they get, and they prefer to see the number of people using their project go up not down.


I agree with the sentiments expressed by various others. 1) They can get rid of needing to use Linux, whenever convenient. 2) Possibly even replace Android too, if feasible or eventually. 3) Way more control, to make sure open-source "politics" doesn't interfere with their money, business goals, and exploitation of users info.


My shot in the dark: embrace, extend, and extinguish Linux.


One of the primary goals is to have a non-copyleft license so they can do more proprietary aspects without being forced to give back to the OS and community that made their company.


It is their right, but I share your sentiment.


Google has a reputation of extinguishing its own projects. So who knows how long their fascination w/ Fuchsia will last.


The allure of flushing GPL software out of their stack is just too high for them to walk away from it.


At least it wouldn't hurt to always have some flowers around for the next Google project entering the Google Graveyard ;-) (I don't expect that Fuchsia would end there any time soon.)


I thought Fuchsia wasn't Linux


In theory, if Fuchsia fulfills its purpose, Google could have Android sit on top of Fuchsia as opposed to the Linux kernel.

This would give them far more control over the direction of the OS, far more control over millions of users, and allow far less tinkering by Android users.


While Android users still tinker within their steadily shrinking degrees of freedom, their desire for software freedom has an escape route within the Android world. But what if Google succeeds in using Fuchsia to lock down the consumer mobile ecosystem? With nowhere left to go, won't users focus on some new space entirely outside of Google's reach instead of just fleeing Google around Android?



My shot in the dark: try to keep up with Apple. They are never going to get Apple-levels of performance per watt from their current hardware and software choices.


I think it's more about moving on from Android.


There's no "extend" here. It's just "replace".


Employee retention. One of the rumors floated early on was that it was just to keep some senior engineers happy and at the company.


This argument never makes sense. Why keep senior engineers if they aren't contributing to the business?


It's defensive. High salaries and plum projects keep good engineers from building your successor.


That keeps them from contributing to your competitors, i.e. against your business.


I presume you make a deal where they also have to put time in on business-critical projects that you want their skill for.


Brain drain. Twiddle for me, not for them. Don't become one of them either.


Linux is great, but it's far from perfect for all scenarios; there are cases where it can be bloated (think IoT, like the Google Home assistant). In those cases microkernels like Fuchsia or GNU Mach make sense.


I can't see where you save with a microkernel for embedded systems. You get the same as a pre-configured monolithic kernel, plus the IPC overhead.


QNX, vxWorks, INTEGRITY,...


While QNX is mentioned, does anyone know of any materials detailing the implementation of MX tables in QNX? I've stumbled upon a brief synopsis[1] of what they do, but it lacked implementation details (such as: do the entries need to be aligned on page boundaries? how many copies are made at the end? etc. etc.).

[1]: <https://cseweb.ucsd.edu/~voelker/cse221/papers/qnx-paper92.p...>


That doesn't address my point. Do you really appreciably save on binary footprint simply by using QNX versus a trimmed-down Linux kernel build?


You mean the binary footprint of a type 1 hypervisor that Linux requires when deployed in the same high integrity computing scenarios that QNX is usually used for?


The use case suggested by the GP was IoT/Home Assistant scenarios. So no, I absolutely do not mean that.


Now that you have safe AND performant languages like Rust or Oberon, what is the reason to not have exokernels for these applications?


I guess a mix of industry not wanting it until security is a legal liability, type 1 hypervisors, microservices and containers being used to tame monolithic kernels into pseudo-microkernels, hiring practices,....


They want an OS with a stable kernel ABI so they don't have to bug device manufacturers (non-FOSS friendly ones like Broadcom and Qualcomm) for source code, or updated binary-only Linux drivers every time there is an update to Linux.


It's already on the Nest Hub Max, so they are already in the vicinity of end-goal.


Possibly replace the Linux kernel!


I find myself asking cynical things like "how will Fuchsia make it easier for them to spy on their users?"


Assuming there's a grand strategy behind this at all, it feels like more of a "steer the industry" play than a "spy even more" play.

If I'm right, the ones who should be worried are Red Hat(/IBM) and Ubuntu. Maybe Amazon, depending on what exactly Google's thinking and how much they weaponize the ability to refrain from open sourcing some of the code.


That's not cynicism, that's realism. Don't let the PR monkeys convince you that reality is wrong.


To replace ChromeOS


Cancel it.


> What's their end goal with it

Move away from the Linux kernel, because it limits their freedom of restricting users freedoms.


This is interesting. I wonder what goal they are aiming towards.

There isn't much information about the capabilities of workstation. Is it a GUI OS? Can one run Flutter apps on it?


Yes, it has a GUI. Flutter is the main/official way to build apps on it. The OS shell is also written in Flutter, if I remember correctly. So far, the OS has been seen on, or confirmed to release in the near future on, one Google Home device. Technically though, it seems to be designed for all kinds of consumer devices ranging from phones to desktops.

IMO this is more or less an experiment which, if successful, will end up merging/replacing Android and Chrome OS into a single OS. Android and Chrome runtimes can be ported to Fuchsia and the underlying OS can be swapped on many devices. New devices can ship with Fuchsia + an Android compatibility layer. Developers would be able to ship native Fuchsia or Android apps. It would work on phones, IoT, Chromebooks and could be installable on desktops.

This is all just speculation though and a lot of things need to go right for this to happen but I'd imagine this would be the ideal/desired outcome for its creators.


Maybe it's just my laptop, but most Flutter apps I've tried were clunky and drained CPU like crazy. On top of that, other people here have mentioned problems with the framework concept itself, which cause certain bugs to just linger for years because they can't be fixed.

I don't really understand the excitement around flutter.


Developing new stacks from top to bottom is crazy expensive and time consuming. Modern OSes have benefited from many decades of optimisation and you can't replicate that overnight. Even a far superior new architecture will start out worse in almost every way and take a long time to catch up.

Just look at Apple's rollout of Swift. Early on it was slow and painful to use for a long time. In the last year or so it's really matured a lot, but it's been a long road and we're still not there yet for a lot of use cases.


I have experienced the same performance on laptop and phone (Google Pay).


Yeah, the news that Fuchsia was embracing Flutter as the main way to develop applications was when my interest in it went from very high to middling. May as well just be web apps at that point, for how they burn resources, add latency, and degrade the overall feel. I was excited by the promise of an open-source QNX-like OS with a GUI layer that's not a complete mess (like the Unix-alike open source world's is), but it seems like they're heading toward something closer to ChromeOS, which I emphatically do not want.


Flutter is the easiest way to write GUI applications on Fuchsia today, but by no means the only one. There already exists a Wayland to Scenic translation layer. Adding support to Gnome or KDE is totally possible. There is even a rust GUI framework under development in the fuchsia codebase called Carnelian. Like Linux, there is no concept of native platform widgets on Fuchsia. It is up to a product developer to build up those types of layers. The Workstation product may build up such a toolkit of native widgets, but that won't bound the possibilities of what you can do with Fuchsia.


I'm very conflicted about Flutter. The developer story is tempting: cross-platform consistency, but extendable to each system's specifics.

On the flip side, there's the odd programming language, the bundle size, and the game-engine-like rendering (seemingly wasteful, but that may improve as hardware evolves).


I find it odd that you consider Dart an "odd programming language", given it is a very "mainstream" language. One of its core goals is to be unsurprising and easy to learn.


Except their web story is horrific. They render the entire thing in a canvas, which means a giant F.U. to non English-FIGS languages and a big F.U. to accessibility as well as extensions.


Definitely. I remember from reading the documentation that they can also target HTML5/CSS (https://docs.flutter.dev/development/tools/web-renderer) instead of canvas, but I'm not sure how complete/coherent the output is.


what does FIGS stand for?


A short search revealed that it’s a term of art in localization, referring to French, Italian, German, and Spanish. Typically these are the first targets when localizing an initially-English product.

I have no idea what, if anything, Flutter’s canvas-based approach has to do with localization.

Also Flutter exposes a hidden DOM with accessibility information for the canvas. This might someday be superseded by a system like AOM: Accessibility Object Model, which is an API for directly constructing an accessibility tree for non-DOM content like a canvas.


Flutter, because it's using canvas, doesn't play well with text input for non E-FIGS languages. Even when it does work it's a second-class experience. Open a Flutter demo, switch your OS to Chinese, Japanese, or Korean, type some CJK, and watch as it shows placeholders while it goes and downloads fonts. Now try to select the text and use the OS's reconversion features (something you might not be aware of if you don't regularly use non-Roman-character languages). Install some language or accessibility helper extension. Go run a Flutter demo. Watch as the extension has no way to find the text, because there is no text to find, just pixels.

That flutter even got this far strongly suggests the people making it are monocultural. That they are thinking "maybe someday there will be a solution" is not the right way to approach building an inclusive GUI framework in this day and age.


Aha, I think we misunderstood your initial post. You meant "non-English and non-FIGS" languages. Thanks for replying with additional context. I definitely learned a few things.

I see what you mean about Flutter having to download the 16+ MB CJK fonts and the lack of visibility for extensions trying to read DOM text. It's possible there are fixes for some of these: the browser local font API could make your local system's fonts available to the Flutter runtime, which would be significantly faster, and the accessibility object model could make content visible to extensions (but only if they were rewritten to read AOM data!).

Also TIL about "reconversion" in CJK IMEs. Pretty neat!

I'm working with some of these same issues right now with a web-based PDF viewer app that runs PDFium in WebAssembly. Trying to make PDF content visible to extensions, IMEs, etc. the same way DOM content is turns out to be quite difficult.

Still, all of this works out of the box with DOM content. It's frustrating how many of these things have given up on "render to DOM".


The first thing that comes to mind (as a GUI developer who is currently translating a product) is that the text renderer can't handle accents above and below Latin characters (e.g. "Let's buy crème fraîche in Curaçao"). I'd also be surprised if it can handle right-to-left strings.


Flutter uses Skia for text rendering and layout; the same rendering engine used by Chrome, Android, and others. I'm beyond confident it can handle all of those situations.

Flutter demos have a localization dropdown for selecting different locales, of which the gallery demo at least supports many, well beyond FIGS: https://gallery.flutter.dev/#/

Note those demos have other problems that make me crazy, like the copious overuse of non-selectable text. Why the default "Text" widget is non-selectable is a total mystery to me: https://api.flutter.dev/flutter/widgets/Text-class.html You have to instead use the "SelectableText" widget: https://api.flutter.dev/flutter/material/SelectableText-clas...

I have plenty of complaints about Flutter, but accessibility and localization aren't among them.


Except Skia is not a text renderer. Skia won't correctly position glyphs for you. It can only render glyphs for you at positions that you provide.


Your understanding of Skia may be out of date.

See SkParagraph: https://skia.googlesource.com/skia/+/refs/heads/main/modules...

And the experimental SkText API: https://skia.googlesource.com/skia/+/refs/heads/main/experim...


It's possible; I haven't looked at Skia for some time. As far as I can tell, in the past, https://skia.org/docs/user/tips/#does-skia-shape-text-kernin... applied.


Fuchsia does a lot of interesting/new things, but the primary motivation for Fuchsia was to make it possible to update the OS independently of drivers. Everyone complains when their Android devices stop receiving software updates, and nearly always the reason that new Android versions can't be backported is that the devices have custom hardware drivers (perhaps closed source) that would need to be updated to work on newer versions of the Linux kernel.

Since Fuchsia was started, Google has committed to maintaining a stable kernel ABI for AOSP, what they call the GKI. This is a huge amount of work for kernel developers at Google but it will allow (in theory) updates to newer kernels without the need to update these old drivers. I believe ChromeOS is also going to use the GKI (not 100% sure about this though).

I think the big question for Fuchsia is how successful the GKI initiative is, since it takes a lot of wind out of the sails of Fuchsia. Linux already works, has a lot more features than Fuchsia, and has much better performance.


> ... Android and Chrome runtimes can be ported to Fuchsia ...

Android has already been being ported to Fuchsia for quite some time now.


I thought Google would no longer follow the idea of replacing Android with Fuchsia. Do you have a source for this statement?


Yes, check the AOSP Gerrit commits related to Fuchsia, quite a few of them there.

You can start with this one,

https://android-review.googlesource.com/c/platform/manifest/...


There is no compelling reason to keep Android running as some legacy option when they are making Android, and more generally Linux, compatibility a goal.

It would just be a slower, less secure option with years of cruft and a huge dependency base they don’t control.


Project Treble and Project Mainline, where Linux drivers are considered "legacy", and the adoption of Rust even with nightly features, don't really show that.

They are only upstreaming to decrease the development effort in the Linux kernel itself; tomorrow that kernel can be Zircon instead.


I don’t know that I disagree with anything you said other than the end goal.

But yes, I agree, I think getting the Linux kernel out of Android is an obvious next step.

Does Android just become another Fuchsia “product” at that point, the same way the workstation and a stripped-down IoT one are?

I’d argue it goes further than that, but my reasoning gets pretty deep into how Fuchsia, for example, handles things like app delivery etc…

I think Android will continue to exist as a supported interop-style solution for years, but ultimately you will end up seeing a total transition to 100% Fuchsia across all Google platforms (server, IoT, workstations, mobile devices etc).


I would say that the facts speak for me; it is the Project Treble HAL documentation that refers to Linux drivers as "legacy".

https://source.android.com/devices/architecture/hal-types

> "HALs expressed in HAL interface definition language (HIDL) or Android interface definition language (AIDL). These HALs replace both conventional and legacy HALs used in earlier versions of Android. In a Binderized HAL, the Android framework and HALs communicate with each other using binder inter-process communication (IPC) calls. All devices launching with Android 8.0 or later must support binderized HALs only."

While Linux folks are still arguing if Rust makes sense to be adopted in the kernel and what features still need to be stabilized,

https://source.android.com/setup/build/rust/building-rust-mo...

And as I pointed out in another comment, here are the ART commits supporting Fuchsia,

https://android-review.googlesource.com/q/fuchsia

I should also note that while people pat themselves on the back because Android uses the Linux kernel, from Google's point of view that is an implementation detail, not supported as a public API for NDK code.

https://developer.android.com/ndk/guides/stable_apis

From the NDK point of view, for app developers the underlying OS is a generic OS with the ISO C and ISO C++ standard libraries plus a couple of additional things. Something the Termux guys have a hard time swallowing.


Wow, that's good to know, thank you; you brought a lot of good info.


Anyone noticed how the GN tool used to build Fuchsia looks like someone really wanted to use Bazel but for whatever reason they had to use Ninja?


Historical reasons - when Chromium was first open sourced, they really wanted to use Blaze, but that wasn't open sourced yet, so they built gn. Bazel (an open source subset of Blaze) wasn't a thing until much later.

Fuchsia is really close to the Chromium team and inherited gn and most of the supporting repo management and build tooling.

gn is less strict than Blaze and Google probably doesn't want to maintain both Bazel and gn forever, so it stands to reason that gn will eventually be replaced with Bazel. That will be a really large effort so it's probably not happening anytime soon and will take a while.

Android, which uses yet another home grown Blaze-alike build tool - Soong - is already in the process of migrating to Bazel. Moving from Soong to Bazel is probably much easier than going from Ninja/Make to Soong due to how conceptually similar those are.


Of course, Android moving from a build system called Soong will mean they lose the best possible name for their build system.

https://memory-alpha.fandom.com/wiki/Noonian_Soong


I wasn't on the Chrome team, but on teams adjacent to it that used the Chromium codebase and toolset, and my recollection is that before gn there was gyp. ("Generate your project"). Gyp was a lot less Blaze-like.


Also, I should mention that one big difference between Bazel/Blaze and GN (and Gyp) is that the latter don't do the actual build; they generate a build that Ninja executes instead.

Blaze owns the whole thing, executes the toolchain, etc.

Blaze is also very much built for the monorepo model we had in Google3. Not that it won't work in other scenarios, but that's where it came from: one repo and when it comes to third party dependencies, one version, etc. And cross-compiling on it felt awkward. GN/Ninja support for multiple toolchains and cross-compiling felt more thought-through, by necessity.


> GN/Ninja support for multiple toolchains and cross-compiling felt more thought-through, by necessity.

I'd argue this is still the case right now, but toolchain support in Bazel is maturing quickly.


It seems that with the current level of Bazel toolchain support it becomes much easier to cross-compile.

Well, that's something I'll hopefully test somewhat soon, as I want to try Bazel explicitly in a heavily cross-compiling environment...


It is done by some teams at Google, so definitely possible to do it decently enough to ship something.


My goal is to build Bazel support for Ada, including building cross-compilation toolchains and platform runtimes. We'll see how it works out :)


This is correct. Gyp came first, then Evan Martin built ninja as a faster make, then Brett Wilson built gn ("generate ninja") because Gyp wasn't intuitive or convenient for a lot of use cases. I don't know if Googlers had an internal Blaze back then - I'm an outsider :)


Yes, blaze predates gyp, AFAIK. Blaze had been around for a long time before I ever arrived there in 2011. But exclusively for Google3 projects. Later I was on teams that used it for iOS dev, but not sure if that was always the case.


How do the build requirements for Bazel and Fuchsia compare? Last I checked, Bazel required Python and a JVM, and that seems like a lot for bootstrapping an OS.


Bazel's single-binary distribution brings its own JVM; it has fewer dependencies than gn.

Most people use Bazelisk: https://github.com/bazelbuild/bazelisk


Beyond that, it was also easier to conceive of a path towards self-hosting with GN since it's written in C++. Neither Java nor Python can run natively on Fuchsia even today. That said, Fuchsia's build system has a fair number of Python scripts anyway these days.


Is anyone providing unofficial builds of Fuchsia? I can see from the article that the requirement is to build it yourself, but I'm lazy/time-poor and I'd really like to try running Fuchsia in an emulator.


Fuchsia is the most serious OS that has significant components written in Rust at this moment, so this is pretty neat!


Depends on how you define "serious"; for desktop-ish, sure, but for embedded, for example, the Hubris OS we've written at Oxide is a key component of the entire company's product. There's a lot more diversity in the embedded space in general.


TIL about Hubris, very cool! One could probably also mention the bunch of hypervisors, as they run on the bare metal as well, and maybe Tock, and I'm probably unaware of a bunch. Rust is definitely a hot language when it comes to OS development, which is really great.

I've cloned Hubris [0]; it seems to have 48k lines of Rust source code. Maybe there are other components that I'm missing. Fuchsia [1] on the other hand had 2.1 million lines of Rust in Dec 2020 [2], and according to tokei has 3.3 million as of now (8b51db9e2b809, March 28 2022), more than it has C++ (2.2 million) and C (366k) combined.

For comparison, a full check out of the rust-lang/rust repo with all the submodules which contains rustc as well as tools like cargo, rustfmt or clippy, and their test suites, contains 2.1 million lines.

But yeah, you can come up with several definitions of "serious". Is an OS that an entire company bases its revenue on more serious than a research project that some call a way to maintain senior developer retention, but which may one day replace components of one of the most deployed end-user operating systems in the world?

[0]: https://github.com/oxidecomputer/hubris

[1]: https://fuchsia.googlesource.com/fuchsia/

[2]: https://www.reddit.com/r/rust/comments/k9djda/expanding_fuch...


> it seems to have 48k lines of Rust source code.

Yep, we are much smaller than Google and when you're fitting stuff into an embedded space, you need things to be much, much smaller. We couldn't fit the output of all that code on the chip even if we did write it.

If you do `cargo vendor` to make the comparison equal in that sense, loc (which I use instead of tokei for no real reason) says there's... 100MM lines of Rust? That seems like quite the bug, lol.

> Is an OS that an ....

Yep, agree 100%. It's part of why I found it so interesting you chose "serious" as the term; not that it's bad of course, but made me think of exactly this question. I have no idea what the answer is. I do think it's a good word to describe a difference between something intended for production and a hobby OS, of which we both know there are many in Rust.


Keep in mind that fuchsia vendors third party libraries, so a decent chunk of that code is third party libraries. That doesn't undermine what you've said, but I wanted to just highlight that not all 3M+ lines of code were written for fuchsia.


Good point, I missed that. For a fair comparison with rustc or other OSs you should probably either vendor sources for the other code bases too (e.g. for rustc take the official source tarballs), or delete the third_party/rust_crates directory in Fuchsia. I did the latter and got 1.68 million lines of Rust in Fuchsia. Still quite impressive.


Redox OS is a serious effort focused on workstation use, and AIUI there are others as well for more embedded stuff.


Correct me if I'm wrong, but IIRC Redox has nobody working on it full time as part of their job? Fuchsia has whole teams.


How much hardware support is there in the public Fuchsia source and how much of it is going to be proprietary?


I think the key thing to watch is what they will replace Qualcomm's firmware with, if anything. Apple made a move a few years ago to reduce their dependence on Qualcomm. From what I've heard, Apple is rolling out their internal replacement for their 5G modem in a year or so.

Basically Android is OSS Linux with a lot of proprietary drivers that are not under the control of Google, and a lot of those are provided by Qualcomm. What they provide is effectively an OS inside an OS. Linux is a somewhat hostile environment for proprietary drivers, and it necessitates some technical steps to insulate against the 'viral' nature of the GPL.

Google likes their dependency on Qualcomm just about as much as Apple does. It would not surprise me if they eventually move to their own in house 5G stack and I bet that would be a proprietary fuchsia exclusive. Proprietary ensures you get your software from Google (or Qualcomm). And making it exclusive to Fuchsia ensures they have full control over the combined package.

Chinese manufacturers are also insulating against being dependent on Qualcomm. E.g. Huawei has their own 5G modem and is of course also in the base station market.


Android could barely be named Linux. More appropriate would be Google/Android or Google/Linux, by analogy with GNU/Linux. Google seems to do everything to push out GNU and the GPL, especially via the compatibility layer "Play Services". Aside from userland, the closed-source modules in the kernel are a plague. I prefer a strong BSD and Linux community over everything being controlled by a single company. Therefore:

We should be worried. My feeling and past experience tell me: "Don't use it. Don't support it. Or we will suffer."


I bet that most of the drivers will be proprietary. It seems to be the main motivation for Fuchsia: to allow a Windows-like model for drivers with a stable driver API, because the Linux-like model does not work well for hardware manufacturers.


Time to buy a PinePhone then.


You basically need a compiler farm for Fuchsia. I tried compiling it once and it took 3 hours, only to find out I needed to recompile to add anything.


I didn’t know compiler farms were a thing, but it makes perfect sense.

During university, our computer security class involved finding exploits in Firefox. It took 8 hours to compile on our average student-grade compute. There were probably faster incremental ways to build, but navigating the Firefox open-source code base was a hard entry point as a student. Effectively, most people got to compile Firefox a few times, but hardly anyone went further and made any code changes or discoveries. Navigating the Firefox build and toolchain is probably an undergrad course in and of itself, much less making any meaningful merge request or finding a vulnerability.

Overall, it was a horribly designed curriculum. The expectation was for undergrads to find a zero day in Firefox. The professor was a researcher from Microsoft, but didn't seem to have realistic expectations for undergrads. Maybe they were throwing darts, hoping one student finds a novel exploit and they can be cited. In the end, no student found an exploit or made any code changes. I was the only one who found a DoS payload for Firefox, but it was neither a zero day (I was poking around a known, less-developed API) nor high risk. It merely crashed Firefox on any webpage which contained those 2 lines of JavaScript. In the end, I got the lowest grade in the class because the professor changed the rubric after no one else found anything. Finding this exploit went from 80% of the grade to 5%, and participation credit went from 10% to 70%.


Sounds like a pretty good simulation of the corporate workspace, though!

I'm sure it was not intentional at all, but it is pretty funny.

Complete with lower pay (or at least less career opportunities) for working extra hard on the Death March project about to be cancelled.


You name it! Most of the code I've written in my career was for projects eventually canceled.


Wow that sounds like a really frustrating and demotivating experience. Seems totally ridiculous and beyond any realistic expectations - I’m surprised nobody involved raised concerns.

I had a similar but not as bad experience. The lecturer was brand new and wanted us to design a programming language after a single vague PowerPoint and the worst introduction to yacc or lex or whatever they are. We were also undergraduates.

After several weeks of complaints he provided an example language, parser, etc. Still, it was simply too hard and in the end nobody could do it. I believe I managed to add a minor feature to his example language as my submission. I managed to get a reasonably high grade simply because I freely admitted I struggled with the entire task but could explain language theory, made comparisons with other languages I know, and described what features I would have added if I could.

Some people totally wrote that whole module off and planned around the scoring system where one failed module in a year would just be discounted and an averaged score created.


I guess it must be common. I had a very similar experience at university except without the fillip of lecturers who actually had programming skills.

Same gig though. Set a programming assignment that's way too hard for the class, because they hadn't been taught programming properly (in this case, implement a few simple graph algos, so hardly Firefox 0-day level). Realize too late that nobody can pass the assignment and retroactively change the marking criteria to be entirely based on the writeup. Result: as one of the only people in the class who actually submitted the full three working solutions, I got one of the lowest grades, because "You explained the algorithm in 'detailed code comments'? Do you think I have time to read students' code? It needed to be submitted in a Word document alongside the submission."

This was at a UK Russell Group university, supposedly one of the elite. IMO universities are jokes. They were crap at teaching decades ago and that was before they were overrun with radical ideological activism. They're simply good at hiding the truth because academics are treated like gods - nobody believes students who say their teachers are terrible at their own subjects.


I'd stick up for my CompSci at my "liberal arts college" education, but that was, like 25 years ago.

"University" typically means TAs the entire undergrad, right?


>Wow that sounds like a really frustrating and demotivating experience. Seems totally ridiculous and beyond any realistic expectations

What, the 3 hour compile time? That's an OS+utils, there are projects that take more... Ever tried compiling QT+KDE?


Try Yocto with Qt. Ugh. I have a 48-core server just for this task.


How are you building your Qt ?

I just tested on my machine (i7-6900k, 8C/16T from 6 years ago), building 5.15: qtbase, qtdeclarative, graphicaleffects,multimedia,quick controls2, serial port, svg, wayland and websockets takes 8 minutes total... it's hardly the end of the world and that's already more than needed for 99.9% of Qt apps. And I don't even use ccache.

    make -j16  5343,12s user 307,95s system 1198% cpu 7:51,67 total
here's my configure line:

    ~/libs/qt5/configure -nomake tests -nomake examples -debug -confirm-license -opensource -platform linux-clang -linker lld


Oh sure, building an isolated Qt instance is quick.

But adding the Yocto overhead (entire bootloader, kernel, and then fetch/decompress/build all the packages) takes a bit more time. And Qwebkit isn't trivial either.

Once Yocto has it all in sstate-cache then it's only like 15 minutes for a total rebuild.


yeah, if you need webkit that's much longer for sure.


Back in the SuSE 6.3 days that would take a whole night.


Old Gentoo user chiming in. My "I won't be using my computer today" builds were either of the major desktop environments (or anything that forced me to rebuild the GTK or Qt shared libs), and... OpenOffice. Good lord, that one took forever. Firefox probably came next after those.


Oh man I was a Gentoo user once. That was funny and brought a lot of memories back. I was building KDE from source and it took 1.5 days to build I think lol.


A couple of tools to make compile farms easier to use:

https://distcc.github.io/ https://github.com/icecc/icecream
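
To give a feel for it, here's a minimal distcc sketch (the host names and job counts are made up; icecream works along the same lines with its icecc wrapper):

    # machines willing to accept compile jobs; "/8" caps jobs per host,
    # preprocessing and linking still happen locally
    export DISTCC_HOSTS="localhost buildbox1/8 buildbox2/8"
    # route compiles through distcc and raise parallelism to match the farm
    make -j24 CC="distcc gcc" CXX="distcc g++"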



this whole thread reminds me of the blog post where someone used AWS Lambda as a pseudo-distcc to compile LLVM in 90 seconds: https://blog.nelhage.com/post/building-llvm-in-90s/


Sounds like my college where teachers expected us to use ML or blockchain in DBMS and Networks projects, just throwing darts so that they can turn something into a paper.


I'm kind of surprised there was no code caching. I compiled the Linux kernel on an X200 in the last year or two and essentially there was a wait of 1-3 hours for a minor change: 3 hours if you build from scratch, 1 hour if you do a minor change and rebuild. It was excruciating and the machine ran hot, but I'd be surprised if it took 8 hours for a rebuild, after building the first time, unless you are literally destroying all of the build artifacts and rebuilding from scratch, every single time.


ccache.


I mean, sure, I guess you can do that. But a basic Makefile does this too. So does every other build system on earth.


You have completely missed the point of what ccache does and how it works.


A tool like ccache will work across multiple workspaces, unlike Makefile. You can prewarm it with an overnight cron job. And in some configurations, you can share it across the organization.
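
Roughly, assuming GCC and a hypothetical ~/src/project checkout for the overnight prewarm:

    # one cache shared by every workspace on this machine
    export CCACHE_DIR="$HOME/.cache/ccache"
    ccache --max-size=20G
    # route compiles through ccache
    make -j"$(nproc)" CC="ccache gcc" CXX="ccache g++"
    # crontab entry to prewarm the cache overnight
    # 0 3 * * * cd ~/src/project && git pull -q && make -j"$(nproc)" CC="ccache gcc" >/dev/null 2>&1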


> You basically need a compiler farm for fuchsia. I tried compiling it once and it took 3 hours, only to find out that I needed to recompile to add anything

Not too bad for an OS and all the utilities and programs that it comes with. The Linux equivalent would be compiling Gentoo (probably a lot longer than 3 hours, even on good hardware).

In 2003, as part of a university assignment, I used a CORBA ORB called 'mico' (or 'micro', not too sure) written in glorious C++ with as much use of templates as the developers could manage, and it took a full 4 days to compile on my aging 1998 laptop[1].

When I switched to an expensive 2003 desktop a few months later[2] the compilation of the same package with the same flags on the same OS (Slackware) took less than five hours.

It is amazing what speed improvements we saw between 1995 and 2005. Just between 1995 (when I bought my 486 desktop) and 2000 (when I was using a pentium pro or something better than that), machines had roughly doubled in performance at the same price.

I really doubt a machine from today is that much better than one from five years ago.

[1] As a student, I counted myself lucky to even have a laptop of my own, instead of booking time at the university lab.

[2] The compilation experience convinced me to save for a few months to get something expensive with lots of RAM


To add to this: A much more common Linux equivalent to this is "making Yocto Linux recompile your OS image". Yocto is not the only option in this space, but perhaps the closest embedded Linux development has to a standard option.

Yocto is not entirely unlike Gentoo - recipes describe components of the system, which is built from source - but with the addition of a fairly sophisticated caching scheme, where the input to a recipe is hashed and the hash used to store the outcome, which is then reused in future builds (and the cache can be shared by different builders) unless the input changes. The other key feature of Yocto is that the system is composable in layers, where upper layers add, extend or override recipes from the lower layers. Layers can and are provided by different parties (e.g. BSP layers from HW vendors or their SW partners).
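
As a rough sketch of what sharing that cache looks like (the mirror URL is made up), it's a couple of lines appended to conf/local.conf in the build directory:

    # keep sstate locally and fall back to a shared HTTP mirror for cache hits
    echo 'SSTATE_DIR ?= "${TOPDIR}/sstate-cache"' >> conf/local.conf
    echo 'SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/PATH;downloadfilename=PATH"' >> conf/local.conf
    # with a warm cache, a "full" image build is mostly setscene (cache) tasks
    bitbake core-image-minimal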

Yocto Linux is used by projects which need to be able to guarantee that they can bootstrap their OS build, customize the build (e.g. filter out use of certain licenses or use a specific toolchain) and manage the SW supply chain (the layer idea, and then in a business context something like RASIC for the layers).

To sum it up, "rebuilding the entire Linux OS w/ caching" is absolutely the norm in embedded dev, as is having a compiler farm doing this hooked into your CI. Edge nodes like developer laptops then access the CI build farm's caches to make a local build bearable, with the caveat that you don't get incremental builds at the component level this way, so usually you still want to either dev separately against an SDK (or reuse the build root) or at least keep your components small and modular enough to not make it painful.

Automotive, network equipment (e.g. routers), stage/production equipment (mixers and other networked A/V gear, etc.) and many other parts of the SW world work this way.

Hacker News is mostly exposed to web development and desktop Linux/mobile app development, which are pretty different. Indeed, perhaps the most surprising thing is how different desktop Linux development is from embedded Linux development and how little cross-pollination between these communities is taking place.

If you develop for desktop Linux, the distro on your box also serves as its SDK - you just install a bunch of -dev packages from your package manager and build against the host. Or perhaps a Docker image as a build env in some cases. But you generally never rebuild/bootstrap the OS, which in embedded is the primary unit of work (with mitigations such as caching).

One side-effect of this is that embedded systems and desktop systems tend to approach updates/OTA differently. In package/binary-based systems, OTAs come in via the package manager. Embedded systems historically tend to go for full system image updates, with mitigations such as binary deltas and A/B partition update schemes, or a partitioning of the update content that is orthogonal to the partitioning that goes into the OS image build. Lately there's a trend for separating applications out into container images that get deployed separately from the base OS image, and thoughts about containers that can move between embedded devices on the edge and the cloud infra in the back.


I'm not familiar with fuschia but those times are what I'd consider normal for an initial compilation of an operating system in regular consumer workstations.

I work on the android operating system and very rarely compile the whole thing from scratch in development environments. Incremental builds plus adb sync (think rsync between compiled artifacts in host and device) make it into a manageable workflow.

Even incrementally, it takes a few minutes to see your changes and that can be a source of frustration for newcomers who are used to instant feedback. Being productive requires making good decisions on how often and what to compile as well as strategies for filling up compilation time.


The last sentence is an eyebrow raiser


I work in videogames - making any change to code takes about 2-4 minutes to compile (huge C++ codebase with a billion templates; it actually takes about 40 minutes from scratch with a distributed build, a few hours without it), plus about the same again for the game to start and test. And god forbid you made any changes to data - then it's about 20-30 minutes to binarize the data packs again and deploy them. Really makes you slow down and think any change through before trying it. The "spray and pray" approach to coding simply doesn't work here.


The compile times are part of the reason why scripting languages like Lua are popular in games. In engine code it's fine, but you don't want to wait minutes to see a change while working on higher-level parts of the game.

The data thing though, that sounds like your engine is poorly optimized for development time. There should be a way to work with data without repacking everything.


While I remember those days of C++ development - I never want to go back there. It's surprising people are willing to put up with this shit in this day and age. No wonder there's so much crunch time.


Even when I mostly do managed languages, I keep getting back to C++, why am I willing to put up with it?

It is the language those managed language runtimes are written in, the main language for GPGPU programming and for two major compiler-building frameworks, and it works out of the box in all OS SDKs while providing me a way not to deal with C's flaws unless forced to by third-party libraries.

I could be spending my time creating libraries for other ecosystems, but I have better things to do with time.


I sometimes wonder if these sorts of observations provide some reason to either make languages embeddable or NOT self-host a compiler for a new language, but continue to write the compiler in C or C++. This is a weak argument compared to self-hosting a language, but might be something to consider in passing.


Indeed, that is why many times a language isn't self hosted.

It wasn't because it was lacking in capabilities, rather the authors decided they would spend resources elsewhere.

Naturally it has the side effect of reinforcing the use of the languages that are already being used.


C++ is fine. It's just nobody likes designing and writing lighter weight tests and simulations to keep local dev workflows productive.

For things like AAA games and OS development, I'm not convinced simply picking another language solves the problem. At least not while keeping all the same benefits. C builds faster, sure, but it doesn't have the same feature set.


I'm trying to ! https://youtu.be/fMQvsqTDm3k?t=86

I'm getting 150 ms of iteration time on small cases, 200-300 on average ones


We do 0 crunch or overtime as a company policy, but yeah, it does slow down things a bit.

>>It's surprising people are willing to put up with this shit at this in this day and age

All major APIs and the SDKs for PS5/Xbox are only provided in C++, so it's almost a necessity. Same reason why we all use Windows - the platform tools are only provided for Windows.


How are you building your code ? Here's my iteration time when working on the main codebase I'm on, https://github.com/ossia/score which is very template-heavy, uses boost, C++20 and a metric ton of header-only libraries: first on some semi-heavy file and then on a simple one https://streamable.com/hfkbzg


I'm actually curious why they bother converting game data at all into binary formats, before production builds. The logic is all there to load the data and you already have the raw data local; I would assume it takes more logic to unpack it and would seem faster to just use the raw data.


The data before packing and the data after unpacking are probably not the same formats.

Some of the packing steps are also probably lossy (eg. Take this super high poly count model, and cut away 99% of the polygons). If you skip that culling step, the game probably won't run.


Yes, that's exactly what it is. The "data" might be 8K textures; as part of the binarization it gets converted into a format that the client can understand, but also follows a "recipe" for the texture set through the editor (so convert it to a 2K texture with a specific colour spec). Same for models etc.

And yes, the client itself usually can't read the raw data, and even if it could there is not enough ram on consoles to load everything as-is. The workstations we use for development all have 128GB of ram just so you can load the map and all models in-editor.


Have you tried ccache now that it has MSVC support?


We use clang for all configurations and all platforms, and we use FastBuild(we also used to pay for Incredibuild but it wasn't actually any faster than FastBuild in our testing).


Yeah, I mentioned it because I see peers doing different things to be productive during compilation times while newcomers will stare at compiler output. Some will jump to writing documentation, take care of issue management, work on some other ticket entirely, etc.


I'm vastly oversimplifying the issue (also I'm not a doctor), but didn't studies show that this type of multitasking is bad for our mental health and increases the likelihood of burnout?


Not a doctor either, just going on articles I've read on this. The sort of multitasking that causes those problems is when both tasks need frequent attention. If you can essentially leave a task alone for several hours, with some sort of notification if there's a problem, that's fine. Even things that don't take much attention but are ongoing tasks, like doing the ironing, are fine depending what else you're combining it with.


This resonates. I usually wrap my long running commands in something that sends a push notification when they finish so that I don't jump around seeing if things completed or failed. I find the distraction of the push notification less disrupting than continually checking for completeness.


macOS comes with a command "say", which is a text-to-speech tool. I'd do things like make && say "build complete" || say "compile failed" with different voices I thought were funny. Generally worked great.

One day, I stepped away and had a particularly intimidating voice say "your build has failed" and apparently knocked out my headphones. I came back just in time to hear that, and see a couple coworkers jump at the sound.

After that, I was much more consistent at disabling sound when I stepped away. I got a little teasing about that day, but generally it worked great.


This is possible in Linux as well. The program is called speech-ng IIRC.


espeak-ng / espeak. But notify-send(1) mitigates the potential for scaring your coworkers.
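
Same one-liner pattern on Linux, assuming either tool is installed:

    # desktop popup variant
    make -j"$(nproc)" && notify-send "build complete" || notify-send -u critical "build failed"
    # ...or the audible espeak equivalent of the macOS "say" trick
    make -j"$(nproc)" && espeak "build complete" || espeak "compile failed"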


I use espeak and it works on Termux too.


I used a similar approach before working mostly remote; now I just curl to Pushbullet to get the notification wherever I might be working.
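
Something like this sketch works (the wrapper name is made up, and it assumes a Pushbullet access token in $PUSHBULLET_TOKEN with their v2/pushes endpoint):

    # run a long command, then push the outcome wherever you're working
    notify_done() {
        "$@" && status="succeeded" || status="failed"
        curl -s -u "$PUSHBULLET_TOKEN:" https://api.pushbullet.com/v2/pushes \
            -d type=note -d title="$1 $status" -d body="$*" > /dev/null
    }
    notify_done make -j"$(nproc)"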


This is actually a great idea. I've used CLI-triggered notifications for things that I knew would take hours, but for coding-related stuff I always think "it's not that long." So, I would immediately go into "active attention multi-tasking" and then recheck if the compilation tab finished, and then decide from there where to shift my active attention, without giving it some breathing room.

Lately I've actively tried not to do that (it's a hard habit to break, and I still feel guilt sometimes), though.


This makes sense about moving your active attention to different object(ive)s. I think I slowly stopped doing "active" multitasking and switched to this low-attention multi-tasking once my burn-out got worse. FWIW: I don't think multi-tasking on its own caused the burn-out either, it was a combination of a few things probably.


It's certainly possible, I honestly wouldn't know. Anecdotally, I find it worse for my mental health to just sit around waiting for things to finish.


fast cycles are liberating, because it's easy to just ask the compiler or your testing harness if you're doing the right thing. I can't speak for the parent, but in my experience, typing isn't the hard part.

With slower cycles, I think more about how much to try before submitting work. Sometimes I feel comfortable pounding out quite a bit of code. Other times, I know there's some subtlety so I need to double check things. I don't want to stumble on forgetting a const declaration, or something silly like that. Iterations are slower, but you can spend time in flow thinking harder about each loop.

Although, sometimes, I do just stare at the console waiting for feedback. That's usually a good time to go to the bathroom and maybe grab a snack.

Not necessarily multitasking. Just being careful about what plates are spinning, and which I can set down or pick up between steps.


> fast cycles are liberating, because it's easy to just ask the compiler or your testing harness if you're doing the right

Type checking is still a very quick part of compile, so it can still support a "fast cycle" workflow if most simple errors are detected via incompatible types. You just need to go all-in on type-driven development, rather than simple reliance on unit tests.
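
For example (file names are placeholders), most toolchains have a check-only mode that skips codegen entirely:

    # C/C++: parse and type-check a translation unit without generating code
    g++ -std=c++20 -fsyntax-only widget.cpp
    # Rust: type-check the whole crate without producing a binary
    cargo check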


ah, I alluded to that with "ask the compiler".

but yeah, types are great. Quickcheck is great too. But you'll have to pry my oracles from my cold dead hands. Computers show me over and over how stupid I am. Yeah, if I have a regression, I'm adding a specific test for that.


> newcomers will stare at compiler output.

I know at least for me, I sit in quiet contemplation and think while the project is compiling. I expect the context switch of going between writing docs and writing code on every compile would be too much for my brain. Is it managers making you feel like you need to be doing something else while your code is compiling? I guess I just never felt like I was being unproductive while my code was compiling.


I just browse HackerNews.


I need to constantly recompile my work app, and it needs a few hours. For the last 2 months I did nothing other than try to compile this app in various configurations and dependency versions to find a state which works. There was none. Commercial Windows shit of course, no ccache, and most of the time McAfee antivirus interferes.

just 3 hours would be very nice. clasp also needs about a day.


Solution: don't run antivirus on your workstation, or at the very least exclude your dev folders from scanning. A lot of your "compile time" is the antivirus eating CPU on compile artifacts.


I think the charitable way to read their comment is that they're aware the AV is a problem but can't do anything about it because it is a work machine and thus likely administrated by a whole other team.


Even in that case it's worth talking to the security team about it.


They may have done. I've worked at plenty of places where the corporate machine was so locked down and corporate IT was so far removed from development that I ended up using two machines: a corporate one with AD credentials and a BYOD MacBook Pro. Some places eventually allowed me to run Intune on the MBP to access some corporate services, but others haven't. And I know of places that have flat out refused to let engineers BYOD.


The AV ticket is only a few months old. In this environment a typical IT ticket takes 6 months, especially when you are not allowed to talk to anyone. A Windows shop.


That's not the responsibility of any open source project to solve though. If the AV slows down productivity then it's up to office politics to decide whether the perceived security/compliance is more important than developer time or whether they should talk to their AV vendor to optimize scanning or whatever.

And I think this is mostly a Windows problem with the synchronous filter drivers? On Linux you can hook filesystem accesses asynchronously.


> That's not the responsibility of any open source project to solve though.

Nobody said it was.

> If the AV slows down productivity then it's up to office politics to decide whether the perceived security/compliance is more important than developer time or whether they should talk to their AV vendor to optimize scanning or whatever.

I agree and everyone here is assuming the GP hasn't already tried that. Sometimes, and particularly in enterprise orgs, making minor quality of life improvements for developers is an impossible task. Sometimes you don't realize just how these places can operate until you've worked in one.

> And I think this is mostly a windows problem with the synchronous filter drivers? On linux you can hook filesystem accesses asynchronously.

Yeah, it's very specifically a Windows issue. Windows IO is a lot more event-driven from what I understand, which makes Linux faster at randomly accessing files but virus scanners more effective on Windows (not that I have a particularly high opinion of them to begin with).

Unfortunately, Windows is what 99% of enterprise orgs IT teams provision for their staff.


Even those that happen to use macOS and GNU/Linux might face the fun of having an AV running, because of IT policies.

Those enterprise AV have macOS, GNU/Linux and even iOS/Android versions for a reason.


Yep, AV vendors, various certifications and the security consultancy industry have created a "something must be done, this is something, therefore it must be done" situation, profiting from security problems and feeding on orgs that have fallen into the pattern of uncritically doing what everyone else in the industry is doing.


Maybe reboot into Linux and use cross-compilation with ccache/distcc/etc to speed it up :)


I'd suggest something like building from GitLab CI jobs, with runners running on a dedicated VM or spinning up temporary Windows containers. Of course that might be shallow as I don't know the complexities of your project.


Add another SSD or create a new partition, disable all antivirus on it, and put your code there.


ha, nice idea. of course forbidden. this is high security code.


What does it compile? If it does stuff like compiling LLVM from scratch I can understand that.


Have you tried compiling Linux before? Imagine trying to compile MacOS or Windows. I'd say 3 hours is pretty darn good.


Linux does not take 3 hours even on spinning rust. I'd say one hour at worst. On my computer with the compile happening on an NVMe, it takes about 15 minutes.


Because you're conflating kernel compile times with an entire OS? Linux userspace compilation can very easily take 6+ hours (try it yourself with Gentoo).

At Microsoft, our massive servers churn out the nightly Windows image overnight, usually 5pm-10am the next morning.


The Linux kernel takes 15 minutes. Compiling all of Linux (aka the equivalent of Gentoo emerge'ing e.g. gnome-desktop from whole cloth, which is what this is), would not take 15 minutes.
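
For reference, the Gentoo take on "compile the whole OS" is roughly a single command (be prepared to wait):

    # rebuild every installed package and its dependencies from source
    emerge --ask --emptytree @world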


But do you have to emerge the entirety of your system because you added a browser? Since that seems to be how Fuchsia's compile times are working.


But that's just the kernel. This is more akin to compiling the kernel plus all packages and applications that make up the operating system.


I clearly remember leaving my computer to build during the night, especially when I was crazy enough to experiment with Gentoo.


It was taking three hours in 1998. Not anymore.

10 years ago, you would be able to compile your whole BSD kernel and userland (plus KDE) in a week or so, done leisurely and without any effort to do it as fast as possible, while just checking occasionally whether it had completed or not. This was on a dual core, off the shelf desktop computer.


The GP was talking about whole OS, not just the kernel (just as this Fuchsia workstation is too).

A week also seems pretty long for BSD. I'm sure I spun up FreeBSD with Gnome 2 in ~24hours once (it was definitely less than a week because I had a weekend to prepare it for a house party). Though admittedly I didn't compile base. But this was a machine from around 2005 sort of era, maybe even earlier.


I'm not a BSD person. A colleague did it back in the day. This is why I also noted "done leisurely and without any effort to do it as fast as possible", since he said he compiled it that way, without rush.


Compiling Linux takes under 3 minutes on my workstation. 3 hours is far from pretty darn good.


Kernel only


A full buildroot (incl. Kernel and full toolchain) compile is not much more than that even on my laptop, when using ccache.


Just rechecked because I'm getting downvoted. The kernel is 90 seconds; the full toolchain and an Xfce environment is 6 minutes.


The default configuration took me 2 minutes and 4 seconds with a 3700X (8 cores).


With the entire base userland, X, Firefox, Gnome etc? Impressive.


We were talking about building Linux, not building Linux + programs to run on it.


That seems excessive. I wonder what they're compiling. Probably building LLVM, cpython, nodejs, etc.


It's a shame fuchsia is so hard to spell. 25% of the references to "fuchsia" so far in this thread spell it incorrectly.


It helps to know that it was named for Leonhart Fuchs. Just remember it is Fuchs-ia.


"Google fucks-ya"? has a nice ring to it.


more like "fewkssia"


To get on top of your fear of the new (and other insecurities), this is the video to watch: https://youtu.be/gIT1ISCioDY.

No need for the habitual google defamation exercise here.


Since nobody has asked yet: why explicitly the Intel NUC instead of general x64?

I can only guess that it's because of HW drivers?


They also support an ARM dev board, the Khadas VIM3: https://fuchsia.dev/fuchsia-src/development/hardware/khadas-... , though with the "core" setup and not "workstation".


Fuchsia will run on a surprisingly large number of x64 machines but because it's not tested on a wide array of hardware, it's hard to recommend you try it outside of the narrower set of hardware where it is tested.


Yes - drivers, and the NUC is a consistent target that's readily available almost anywhere in the world.


Anybody know what the license of the source code of Fuchsia is? I could not easily find it on the website.


It's a mix of MIT, BSD and Apache


Does it matter? Google can close source an internal pre-release fork and then make a final set of modifications that will sufficiently deviate it for production releases.

Isn't the entire business reason behind Fuchsia that Android is a bit "too open source" and they want more control?

They are probably going to get some free bugfixes from "enthusiasts" and then close source a sufficient amount of it for "security reasons". Or whatever serves their "not evil" purposes.

But at least SOME of the code is in the open.


> Does it matter?

I mean... yeah, it does?

There's good reason to be suspicious in general, but if every time they do something non-evil the immediate reaction is "okay but this is probably actually evil in some convoluted way", then that suspicion becomes paranoia.

Even if they put out closed-source modules like they do with Chrome, having an open-source microkernel with a focus on sandboxing and security, with the resources of a major corporation behind it, is incredibly beneficial for the ecosystem.


If you read beyond the title, I don't think this is what you think it is (at least not yet anyway), since it is still not ready as a general purpose workstation.

It's more like building it, rather than using it here.


It's not viable as a workstation, but it's like an officially supported playground for enthusiasts.

"Release early, release often".


Just the intent to build it is newsworthy.


Would there be a growing job market for people who master the Fuschia ecosystem now?

Or is it basically just for Google employees working on Google devices?


I think the idea of a proper production grade Fuschia workstation that you could use as a daily driver is still a few years away realistically.

However, I think when that happens a lot of interesting opportunities will inevitably open up, because it provides what I think is an extremely compelling story for a lot of people you might not initially anticipate. For example, from a security and general maintainability point of view I would expect it to become the new obvious choice inside organisations for those who aren't still tied to Windows-specific desktop applications.


I'm not sure people realize just how much money Google has thus far sunk into Fuchsia for (AFAICT) very little in return. It was in the billions... years ago.

I don't know if this is still the case but the mission was absolutely to replace Android. Many at Google believe Android to be unsalvageable on two fronts: drivers and the general ecosystem.

It has been pitched internally at various products and (AFAIK) only found traction thus far on Nest hubs. It was heavily pushed for Chromecast but I guess nothing came of that.

I'm not an OS expert by any means but I am unconvinced about the viability of a microkernel architecture beyond theory. Security is an often-cited issue with context switching between user and kernel space.


Project Treble is an effort to separate drivers from the Linux kernel, and later Android added a mechanism to upgrade the kernel via Play services. I guess Google is resourceful enough to hedge with different approaches to solving a similar problem.

However, the crux of the problem is the Android manufacturers. You can't solve a human and incentive problem with a technical solution. Project Treble makes it easier for manufacturers to support their devices, but they are still not motivated to do it. Google needs tighter control over what devices manufacturers release.

Android One is a good approach: only certain devices qualify for inclusion by meeting certain criteria. Google could actually put a watermark on insecure devices that are not updating their Android version, just like Chromium does with websites not using HTTPS. I think this is more effective than creating a new OS just to let manufacturers be lazy.


I bought two Android One devices... I'm afraid that the program is now dead:

https://www.android.com/one/

Their "Explore our latest phones" section shows 2-year-old devices :/


Android One is indeed dead.

There are no incentives for the manufacturers to make these devices, as customers don't care and don't want to pay a premium for them, while the manufacturers can get money from adding crapware to their phones.


Nokia is still releasing new models (oh God, so many new models), and I think they're all on Android One. But indeed the Android One website not being updated with new ones is a bad sign.


Nokia branded phones are one of the best update experiences on Android.


Not quite, Treble is more about separating hardware specifics in a versioned, backwards-compatible way. It's more for replacing the previous HAL system, which in itself already abstracted the kernel for driver support (not necessarily just the kernel, because modern platforms run things like audio, sensors and telephony outside of the kernel's purview, in specialized DSPs or secondary MCUs).


> Many at Google believe Android to be unsalvageable on two fronts: drivers and the general ecosystem.

Could you elaborate? I'm simply curious why people would feel that way about Android. By the ecosystem, do you mean the fact that apps are primarily Java? That they're poorly made? What makes the drivers unsalvageable? Proprietary blobs interacting with the Linux kernel? Something else?

It seems like a new kernel would have minimal drivers, but maybe it would offer a better way of having proprietary drivers? Linux doesn't have a stable device driver ABI so maybe that is making it hard for Google to update Android without the phone manufacturer being involved in the updating?

I'm just curious what people see as the problems with Android that would require a new OS to fix. For example, if the problem is poorly made apps, that problem will follow you to a new OS if you let them publish for the new OS - and if you truly don't want them, you could remove them from the Play Store.


The drivers part is easier to answer. When Linux started, drivers were compiled into the kernel. A few years later modules came along to dynamically load drivers (and other things). But modules could still (and can still) crash the kernel and the drivers don't necessarily have a particularly stable interface.

Compare this to Windows. Drivers were once the bane of Windows and a huge problem for stability. Microsoft invested heavily in this and it seems to have worked. The Windows driver interface is, I believe, stable and has been for many years at this point, and drivers themselves are more insulated from the kernel. Not completely, but it's better in many ways.

Android really has its hands tied here by what Linux does. You may agree or disagree with that but the above is a view held by influential people within Fuchsia.

The ecosystem is really around the process by which updates are released. This is a tedious process of maintaining many Android trees and updating them with new Android releases. These need to be patched onto the existing trees or the changes made to the existing trees need to be ported onto the new Android tree. This is a nontrivial task either way. It's why Android phones get limited updates and those updates can lag behind the official release by months or even years.

This isn't a situation Google likes so some want to make the manufacturer responsible for less, most notably the drivers.

Remember that Samsung is motivated to sell handsets, not update existing handsets.


Samsung might not be directly motivated to make updates, but it's now a selling point and consumers care about how many years of updates they'll get. There's no way to skimp on it without hurting consumer trust and long-term brand value.


Virtually no one outside of tech forums considers years of updates as a factor for which smartphone they pick. Most people, at least in the west, get a new subsidized phone on a contract every couple years. And if anything, they hate updates because stuff they're used to suddenly doesn't work like how they expect any more.

I've never heard anyone in real life say "I bought a Samsung phone last time and it was good but it didn't get any updates after 3 years so I'm never buying one again."


I have no idea how many people consider it, but Samsung made a commitment to offer 4 years of software support, and one year more for flagships [1].

A few years ago it was not like that; I don't remember there being official promises, and you typically got a year or two of updates. Until recently, long support was always mentioned as a big plus for iPhones.

Now every review I look at mentions years of support as a big thing. I also see quite often on forums "I still have a year of updates for my device, I'll skip this year's model", so people do think about it.

It's possible only techies look at these things (and read reviews and online discussions), but even so it's still better than it was.

[1] https://www.gsmarena.com/samsung_pledges_4_os_updates_5_year...


↑ This. I got a Nokia Android One phone for this reason. I need security updates since I do banking and credit card transactions on my phone


> But modules could still (and can still) crash the kernel and the drivers don't necessarily have a particularly stable interface.

> Android really has its hands tied here by what Linux does. You may agree or disagree with that but the above is a view held by influential people within Fuchsia.

I don't really think driver robustness is an issue here. The number of phones I've seen, used or heard about with driver problems is zero.

Add in the fact that phones are almost never rebooted (compared to desktops/laptops and other computers), and the driver situation looks incredibly stable.


Manufacturers keep stats of the number of kernel panics per year.

I was disappointed to find that on many models of phone, there was not a single active user who had not experienced at least one random reboot caused by a kernel panic a year after purchase.

The fact it boots straight back up to the lockscreen means many users will never notice that it's happened though.


> I was disappointed to find that on many models of phone, there was not a single active user who had not experienced at least one random reboot caused by a kernel panic a year after purchase.

The actual stats here would help; across a few million devices, random reboots unrelated to software or drivers are to be expected.

You're saying that, on some models, the device never crashed. On others, there was at least one crash. That's at least a 0.00001% crash rate. That's unbelievably good. At that rate it's probably a hardware error, not a driver error.

If you used a different threshold we'd have a better idea - how many models have a crash rate of 10%? 5%?

> The fact it boots straight back up to the lockscreen means many users will never notice that it's happened though.

That provides further evidence that the money being poured into a new kernel to alleviate a problem that happens so rarely it can't be distinguished from statistical noise, and when it does happen, is almost never noticed by the user, is money being wasted.

The return here is not proportionate to the problem being experienced, and the solution is definitely not proportionate to the money being spent.


> You're saying that, one some models, the device never crashed

For at least one user, yes.

But there are many users who maybe only turn the phone on 5 minutes a day to check messages - so it isn't really odd for a few users to never see a crash.


> But there are many users who maybe only turn the phone on 5 minutes a day to check messages - so it isn't really odd for a few users to never see a crash.

I don't really know anyone who uses their smartphone for less than 5m per day, but that's irrelevant anyway.

The question is still: is the money being put into preventing this problem at all proportionate to the size of the problem?

If there's a problem that no one notices[1], is a solution really worth spending millions of dollars per year over five years?

[1] So few users complained about this it's not even a statistical rounding error.


I would bet money that it’s _usually_ a very difficult to reproduce software error.

edit: sure, bit flips will happen at scale, but they’re not as common as bad threaded programming.


Huh? Are you saying that in Windows, drivers don’t run in kernel space?


I think they were talking about the various driver frameworks in Windows (WDM, KMDF, UMDF, NDIS and the various miniport drivers) having stable/well-defined interfaces and support libraries.

But also yes some drivers don't run in kernel space! https://docs.microsoft.com/en-us/windows-hardware/drivers/wd...


I believe the main problem is that Windows goes to reasonably extreme lengths (it's not perfect, but it's fairly good) not to break the driver interface, so if someone makes a driver for something and then semi-abandons it, the driver will continue to work for many years.

This is often clear with graphics cards: the main problem is that old cards don't get support for newer versions of DirectX, but they otherwise continue to work.

Linux explicitly doesn't support this -- that means when some hardware manufacturer makes a closed source driver, it's very hard to update the kernel. Unfortunately it seems many bits of important phone hardware (like the stuff Qualcomm makes) only have closed source drivers.


Some kinds of drivers (e.g. graphics) run at least partially in userspace.


Yeah, but it’s not the norm.

Don’t get me wrong. Windows did a lot of work on driver stability. But I think cletus is too narrowly focused, and misunderstanding the context around Android and Fuschia.

Fuschia has been described as a replacement for both ChromeOS and Android, with a goal of solving a range of problems (not just driver compatibility) and unifying the target development platform (beyond the somewhat-limited Android-for-ChromeOS options that exist today).

It does not seem to me that it’s primarily about fixing Android driver compat, which could probably be done in simpler ways (and, as others have noted, Treble sort of kind of exists to mitigate).

(I also was amused that Cletus claimed that Windows drivers run in userspace—generally not true—while claiming that there are no real-world microkernel OSes.)


> Remember that Samsung is motivated to sell handsets, not update existing handsets.

Even though this is true, they still offer the longest updates.


> What makes the drivers unsalvageable? Proprietary blobs interacting with the Linux kernel?

Most drivers in the embedded world come from the BSPs (board support packages) of the SoC vendor. And holy hell wherever you look at leaked source code (cough Mediatek) it's madness. Nothing is upstreamed because the quality is so shoddy that you'd get Linus Torvalds into a proper ragepost if you dared to post it for submission, and as a result Android and most of embedded Linux is stuck at outright fossilized kernels where it's extremely hard to backport anything from newer kernels - which is also the reason why so few Android phones are supported for longer than two or three years after initial release. The SoC vendors simply won't provide BSP updates because of the effort involved.

And forget about copyright requiring vendors to open-source everything in Linux kernel space... most simply do not care and there is absolutely no will to enforce it on a large scale - on the contrary, people trying to do so got booted off [1].

The proprietary blobs for components like WiFi/BT/GPU are only the icing on the cake. Barely any effort is made to ensure they actually hold up in real world usage, only enough to pass qualification testing. IIRC Apple had at least one RF vendor make a completely new firmware for their chips because the official one was riddled with bugs.

[1]: https://sfconservancy.org/blog/2016/jul/19/patrick-mchardy-g...


Patrick McHardy wasn't interested in GPL compliance and source code release, only in extracting money from non-compliant companies.

Software Freedom Conservancy's approach on the other hand is to require compliance. Through their lawsuit against Vizio they are also aiming to make it possible for any user of GPLed but non-compliant software to sue for compliance. Hopefully this will change the amount of spontaneous GPL compliance for Linux in the industry.

https://sfconservancy.org/copyleft-compliance/vizio.html https://sfconservancy.org/copyleft-compliance/principles.htm...

Got a link for that Apple firmware thing? Sounds interesting. We really need open firmware for hardware devices. There is some for very old hardware though.

https://wiki.debian.org/Firmware/Open


> Patrick McHardy wasn't interested in GPL compliance and source code release, only in extracting money from non-compliant companies.

So what? If it makes them comply then I'm fine with it. No one else has taken on the fight except against a handful of large companies (Vizio and a set-top-box manufacturer whose name I forgot are the only cases I remember). Everyone else, particularly in the phone and embedded sector, has outright shat on the GPL for decades now.

Google in particular would be in an excellent position - they possess the Android trademark, and they could require manufacturers and SoC vendors to properly open-source their stuff as part of the Android license, even for AOSP-powered devices.

Governments could also step in, similarly to the enforcement of patents, but no government in the world has done anything to establish legal protection for open source that does not rely on open-source authors filing lawsuits on their own! Just imagine customs going to a trade fair and confiscating every device where the GPL and other OSS licenses are not complied with. After two rounds, no one would dare mess around any longer.

> Got a link for that Apple firmware thing?

Unfortunately not, that was many years ago and I might be remembering it wrong :(


AFAIK Patrick didn't even get compliance at all, just used shady tactics to shake down companies for money. The only way his actions resulted in compliance is if companies were scared after hearing about them and did their own compliance work, but that seems unlikely.

Conservancy are doing the Vizio lawsuit, do a lot of behind-the-scenes work, supported the VMware lawsuit, did a lawsuit against Best Buy/Samsung/Westinghouse/JVC and did one of the earliest compliance actions against Linksys resulting in the OpenWRT project:

https://sfconservancy.org/copyleft-compliance/past-lawsuits.... https://sfconservancy.org/copyleft-compliance/enforcement-st...

Harald Welte of gpl-violations.org did a lot of GPL compliance actions in Germany, admittedly none recently.

It would be great if Google could do GPL compliance actions against Android vendors, but they seem to be moving away from GPL projects instead of embracing them, so that seems very unlikely.

There was already a case where a Linux copyright holder got the USA customs department to withhold import of some GPL violating tablets. I think that eventually resulted in compliance, the details are somewhere on LWN, I forget the URL though.

I think if the Vizio lawsuit is concluded in favor of Conservancy, that is probably the best chance for widespread GPL compliance.


> So what? If it makes them comply then I'm fine with it.

An infringement lawsuit can't make anyone comply because a court will only ever order an infringer to pay monetary damages. An infringer might settle a lawsuit by agreeing to comply with GPL terms instead of paying damages, but that's not something a court would order.

Theoretically, if the GPL were a contract (an exchange of promises) and not a license (a grant subject to conditions), then a court could order a party in breach to comply with its terms. A court would only do that, however, if monetary damages were insufficient to compensate the non-breaching party.


Courts can also deliver injunctions, "specific performance" and similar non-financial remedies, not just monetary remedies.

https://en.wikipedia.org/wiki/Legal_remedy

The Conservancy lawsuit against Vizio contends that the GPL is also a contract and that downstream users are third-party beneficiaries of that contract, so they should be able to sue for compliance with the contract. It's a really interesting approach and I hope they win. I encourage you to read the court documents, they make for interesting reading.

https://sfconservancy.org/copyleft-compliance/vizio.html

Also, monetary damages are pretty much always insufficient in GPL cases, since money doesn't get you source code. I guess if the money was enough to pay for reverse engineering and reimplementation of the fixes/features present in the non-compliant codebase, then they could be enough.


> I guess if the money was enough to pay for reverse engineering and reimplementation of the fixes/features present...

Exactly.


> Nothing is upstreamed because the quality is so shoddy that you'd get Linus Torvalds into a proper ragepost if you dared to post it for submission

True, but the community does clean up and upstream stuff over time. It's just very slow work, made even more confusing by how much SoC hardware gets duplicated with small variations across different platforms. De-duplicating and merging drivers for some random IP block can involve pretty serious effort.


Linux does not have a stable driver ABI or API.


Isn't fuchsia a microkernel?


As an outsider, I'd like big corps to spend billions on research projects or technologies like this than social networks like Google Plus. I feel a lot more good can come out of these projects even if the projects themselves fail.


Generally speaking, the hybrid microkernel is the architecture both macOS and Windows went with, where some drivers still run in ring 0 but there is some level of isolation between the core OS stuff and the driver stuff.

For example, this allows Windows to restart a crashed GPU driver.

Windows (and I think macOS as well) also has a stable kernel ABI/API for drivers, a fact that's probably appreciated by the hardware vendors.

The bad name attached to microkernels I think stems from the debate between Tanenbaum and Linus in the 90s, where Tanenbaum's microkernel was sorely lacking in performance, mostly owing to the incredibly resource-intensive context switches on the CPUs of the time.

However, on newer CPUs, this cost is much less significant as CPUs are better optimized for this, making it worthwhile to revisit this issue.


> unconvinced about the viability of a microkernel architecture beyond theory

QNX is an example of a real-world microkernel OS that BlackBerry purchased for use in mobile phones. One could argue it had superior performance to Linux, not because of throughput, but due to its real-time guarantees.


Not sure I understand the relationship between real-time guarantees and performance.


Performance: Computing pi to a billion as quickly as possible

Real time guarantee: When a hook for an event is called it transfers all computation to that call within a time limit predictably and deterministically

Please correct me if I am wrong.


One of the core characteristics of a real time operating system is that it has to service events within guaranteed time.


It has to start servicing events within a guaranteed time, but completing them is another matter. If you have multiple tasks running in parallel and a lot of incoming events, constantly switching to service them will delay completion of your ongoing tasks wrecking their performance, and eventually the scheduler will get overloaded.

So it depends on the cost of context switching, which is typically very high, and the frequency of context switches. Real time OSes are really important for applications where you have lots of lightweight tasks that need to be handled very promptly. If you have long running tasks (long running in CPU world, which can be fractions of a second) constantly getting interrupted and context switched can brutalise performance.


I have a degree in the subject matter and I also run QNX so I know how it works and am well aware of what the trade-offs are, thanks. Before I even graduated from the university, I used to write lots of code which ran inside of interrupt requests on the Commodore 64 and Commodore Amiga where finishing before the next vertical blank was critical, so you could say that I've dealt with the practical even before I graduated in the theoretical. Not only did the code have to respond within the next vertical blank, ALL of it had to finish by the end of it, else I would have been the laughing stock of the scene. And before you cut in again, yes it was computationally expensive, therefore the code had to be really, really fast to get it all computed in time.

When I write something here, I've already done it, and often on multiple occasions in multiple scenarios at multiple companies, therefore I don't need a lesson.


I’m sorry if I offended you, but the question was about performance and event response is only one aspect of it, so I thought I’d expand on that for the benefit of anyone reading the thread.


Fine. Apology accepted.


Real-time response/latency and performance/throughput are usually a tradeoff - you can't have both.


Only if you don't consider metrics like tail latency to be performance metrics.


The idea that microkernels will have insufficient security due to additional context switching is a concern I’ve not heard before. Care to elaborate?


I was looking for a Linus quote I saw on this but can't seem to find it. It's not specifically about security, but Linus said something about how transitioning between user and kernel space puts an absolute lower bound on the cost, and you're simply going to do more of that in a microkernel architecture.

Think about it this way: if the TCP stack is in user space then how does another process talk to it to send a packet? Is it directly? Well, that has issues. If it's via the kernel then you're already transitioning into the kernel and then out to user space again.

If this is an issue, will the temptation be to make some performance-related bypasses?

Linus of course favours monolithic kernels [1] so consider that a disclaimer.

[1]: https://en.wikipedia.org/wiki/Tanenbaum%E2%80%93Torvalds_deb...


Someone should kill that link, it is no longer a relevant argument. Linus was mostly talking out of his ass complaining about microkernels. His main gripes were specific to Minix and Mach which were popular academic projects at the time.

There are many ways to make microkernels nearly as fast as monolithic kernels, while being much safer and more modular, with all the advantages that entails. See QNX or seL4 for modern implementations.

Meanwhile, Linux itself has been getting more and more microkernel-y with things like eBPF, which because of the monolithic design makes the whole kernel much more vulnerable than warranted.


Huh? It’s widely said that microkernels trade performance for security. You seemed earlier to be saying that running core functionality in userspace implies a degradation in security. Typo?


> Think about it this way: if the TCP stack is in user space then how does another process talk to it to send a packet?

In a microkernel system, if a process holds a handle to a TCP stack then it can make calls like create_socket() and then use a socket. For better performance such calls can be executed without visiting the kernel, via call gates. If a process doesn't have a TCP handle, then it cannot access the network which is the best choice for most applications. And of course a privileged enough process can tap into other processes' inter-process calls and see what they are sending and receiving, or even intercept those calls and modify requests and responses, for example, to emulate a network card or to act as a firewall.

In legacy systems like Linux or Windows every process has excess privileges - for example, access to the network, to the filesystem, to information about other processes, to shared memory, to hardware serial numbers. Linux has several hundred system calls and they are available to any process. In the microkernel system I am describing, a process would have the minimal privileges necessary for its job. For example, a TCP daemon would only have a handle to a router daemon (which redirects packets to a network interface) and no access to the filesystem, information about the kernel, and so on. Even if this daemon had a vulnerability it wouldn't be of much use to an attacker.


There are user-mode TCP stacks. It is doable with user-mode IPC and an IOMMU.


Most OSes targeted at the embedded space are microkernel-based, while Windows and macOS have long been microkernel-inspired, with plenty of OS subsystems running in userspace.

UNIX FOSS clones abhor microkernels, and then run containers everywhere with a monolithic kernel stuck accessing the real hardware via a type 1 hypervisor.

It hasn't been just a theory for several years already.


> much money Google has thus far sunk into Fuchsia for (AFAICT) very little in return. It was in the billions... years ago.

Citation needed


The average cost of a Google engineer is at least $500k/year. This includes all direct compensation as well as amortized office costs, perks, meals and so on. I have no official figure on this but the average total comp can be well-established from levels.fyi as well as my own experience (disclaimer: Xoogler).

1000 engineers will give you a burn rate of $500m/year. I guarantee you the head count associated with Fuchsia is higher than this, probably much higher.

Again, I have no official information on the current resource allocation but you can figure these things out by, for example, looking at the leadership structure. At a company like Google, certain positions will indicate head counts. An engineering director probably averages ~100 engineers rolled up through 2-3 layers of managers. A VP means 200-500.

Familiarity with how Google staffs projects, how much Fuchsia was staffed while I was still there, and the costs involved gets you easily into the billions of dollars over 5+ years.


https://techcrunch.com/2018/07/19/one-day-googles-fuchsia-os... says, as of 2018, “about 100” people work on Fuschia.

I find your “much higher than 1000” estimate a bit surprising.


I don't think L4 and below costs $500K? $170K base + $30K bonus + $100K in stock... I don't think other benefits add $200K more?


Their health insurance package is extremely expensive and a huge amount of money goes into supporting campuses with buses for commuters, full meals, laundry service, and so on.


I wouldn't call health insurance "extremely expensive" in the context of Google SWE salaries. On average it probably costs on the order of $5-$10k/head/year at most. My very loose upper bound estimate is $50k/year for all aux benefits and bus/campus costs.


I'd be willing to bet something like Fuchsia is on average L5-6 (or more) given retirement projects at Google attract some very senior engineers that would offset the L3/4s if you amortize them.


nit: Stock grants/vests are not a cost to the company. They just create new shares and give them out.


They dilute shareholder value, and if you consider that many tech companies are also doing stock buybacks, it seems plausible to consider the net effect of (Handing out "free stock" with one hand) + (Buying back stock with the other hand) to be spending money.


It has been pitched internally at various products and (AFAIK) only found traction thus far on Nest hubs.

Since my Nest Hub received the update to Fuchsia it's been generally more laggy and unresponsive. Occasionally it needs a power cycle. I wish there was a way to downgrade to whatever it had pre-Fuchsia :(


> with context switching between user and kernel space.

This is an issue only because today's CPUs are optimized for legacy monolithic kernels, which rarely switch, and not for microkernels.

Microkernels are very important because they make it possible to reduce the amount of privileged code and move most things out of the kernel. Legacy kernels like Linux or Windows are just an endless source of vulnerabilities, use legacy programming languages and have zero security. Linux doesn't even use static code analysis.

For example, in Linux a network card driver runs in the kernel with maximum privileges. In a microkernel it would be a separate process without access to anything else in the system (not even the file system), so exploiting it wouldn't give much to an attacker.


No it's not. It's an issue because address translation is expensive, involving page table walks that require complex access to large in-memory data structures. CPUs cache the results but address space switches blow them away, by the nature of what they are.

Modern CPUs (well Intel CPUs at least) have lots of features to try and mitigate these costs but they aren't widely used. For instance memory protection keys.


Modern CPUs support large pages that cut off some part of the page walk. Linux supports them transparently, so if you set up a bunch of threads sharing a single large address space it will be compacted into a handful of large pages.
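
For what it's worth, you can also ask for this explicitly on Linux; a minimal sketch using madvise(MADV_HUGEPAGE) (the hint is best-effort and assumes a kernel built with transparent hugepage support):

    /* Ask Linux to back a large anonymous mapping with transparent huge pages,
     * cutting down the depth of the page-table walk on a TLB miss.
     * The hint is best-effort and needs CONFIG_TRANSPARENT_HUGEPAGE. */
    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void) {
        size_t len = 1UL << 30;  /* 1 GiB */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        if (madvise(p, len, MADV_HUGEPAGE) != 0)
            perror("madvise(MADV_HUGEPAGE)");  /* falls back to 4 KiB pages */

        /* ... touch and use the memory ... */
        munmap(p, len);
        return 0;
    }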

Single address space OS's in a broader sense have been made unviable by Spectre vulnerabilities; these days you need an address space flush at every transition from a "more private" to a "less private" information domain, which is basically every context switch when applications can't trust one another and have mutually private information that they're trying not to disclose to other apps.


> Single address space OS's in a broader sense have been made unviable by Spectre vulnerabilities;

This is valid only for current CPUs and probably can be fixed. For example, current CPUs do not do speculative accesses for memory-mapped IO, which means a similar mechanism could be used to prevent access to kernel or other processes' memory. There could be registers that contain the lower and upper bounds of accessible memory, checked before every access.


MPKs fix that. They can be switched more or less instantly because they don't require a TLB flush, and (recent) Intel CPUs don't speculate through MPK boundary violations.
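
For the curious, Linux exposes these as protection keys (pkeys); a minimal sketch, assuming a CPU with PKU support and glibc >= 2.27:

    /* Flip access to a region with a memory protection key: pkey_set() is a
     * plain register write (WRPKRU), no page-table change, no TLB flush.
     * Needs a CPU with PKU and glibc >= 2.27; error handling kept minimal. */
    #define _GNU_SOURCE
    #include <sys/mman.h>
    #include <stdio.h>

    int main(void) {
        void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        int pkey = pkey_alloc(0, 0);
        if (pkey < 0) { perror("pkey_alloc (no PKU?)"); return 1; }

        pkey_mprotect(p, 4096, PROT_READ | PROT_WRITE, pkey);  /* tag the pages */

        pkey_set(pkey, PKEY_DISABLE_ACCESS);  /* region now faults on any access */
        /* ... code running here cannot touch the tagged region ... */
        pkey_set(pkey, 0);                    /* restore access, still no TLB flush */

        *(volatile int *)p = 1;               /* works again */
        printf("ok\n");
        return 0;
    }

The interesting part is that pkey_set() is just a userspace register write, so flipping access on and off costs nothing like an address space switch.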


QNX did it right. So far, it doesn’t seem like anyone else did.


INTEGRITY OS, and several other RTOS in embedded space.


What does Tesla use? (searching the web doesn't seem to yield a consistent answer)

Most of this thread has been focused on how Fuchsia will supplant Linux on Nest, Android and ChromeOS devices. However, I immediately think about how it will be used in automobiles, where RTOSes and prioritizing/scheduling operations deterministically are critical.

I'm sure Fuchsia could be leveraged for autonomous robots too, but since they gave up Boston Dynamics, not so sure anymore. Perhaps they extracted what they needed from BD before selling them off. Again I wonder what OS Tesla uses for their robots, both Grohmann and Optimus.

I have for many years thought that there was, or soon will be, a battle in the RTOS space where Blackberry (QNX) would find themselves in a situation very similar to the one they were in back when the iPhone and Android came onto the scene. (I know they were very different departments. I don't even know if they owned QNX at the time, or if you could even consider them the same company anymore.)

There is obviously a paradigm shift taking place in the automotive industry, where the incumbent/ICE manufacturers have to choose between continuing down the path to oblivion or pivoting to compete with the likes of Tesla, NIO, Polestar, etc. The RTOS will play an important part in the transition, and Google knows this.

Google wants to be part of the stack and I believe they believe Fuchsia is the answer. Actually, I'd go so far as to say they want to own the stack (or be its linchpin), and again Fuchsia combined with Android Automotive (not Android Auto) is their attempt to move in while the incumbents are preoccupied with the other difficulties involved in this transition.

If I was QNX/BB, I would be very concerned. Imagine the world of Automobiles resembling that of the Smartphone... You basically have two choices; Apple or a myriad of others all running their own flavor of Android... think Tesla or all the others running Android Automotive (Fuchsia).

https://en.wikipedia.org/wiki/Android_Automotive


No idea what Tesla does; the rest of the car industry cares about stuff Tesla apparently does not, given the public reports on their lack of software quality.

NVidia uses a mix of QNX (for production) and GNU/Linux (for development) for their vehicle software.

https://developer.nvidia.com/drive/driveos

Note the safety-critical remark for QNX: when human lives are at stake, one gets to use OSes where safety is the first priority.


Tesla uses Ubuntu Linux. QNX is probably obsolete vs a Linux RTOS.


Tell that to anyone doing high integrity computing certification with included liability clauses.

Who from the Linux RTOS contributors will show up in court?


Tesla has liability clauses concerning people's lives, and if they trust Linux then any other serious project can do so. BTW, why single out the kernel? It's ad hoc. If your critical software project crashes because of any open-source userspace library, e.g. .NET Core, it's no different.


It shows given the sad stories regarding their auto pilot "quality".


QNX is living proof of the viability of the microkernel architecture in industrial settings.


I'm surprised no one has made a QNX-compatible clone? It's reputed to be a very small OS, or at least a small kernel.


I don't really know anything about QNX (like I said, not an expert) so I'm curious: just how general-purpose is it? Has it made design decisions based on its targeted use that are problematic for, say, running untrusted code in a sandbox?


Many car infotainment systems are either QNX or hypervised on top of QNX.

https://blackberry.qnx.com/en/industries/connected-autonomou...


My experience with QNX is quite limited, but the OS is very general. There is a desktop GUI powerful enough to serve as part of a self-hosted software development environment. Generally, the tradeoffs are made to keep the core microkernel small and to give the system reliable, guaranteed response times rather than raw performance, because it is intended for hard real-time environments.


Microkernels are also not very useful on hardware without IOMMUs or separate buses isolating external components (like most SoC hardware), because a rogue-acting driver can take over your system anyway, so it has to be part of your trusted base. You can make such drivers into separate kernel-side threads or tasks as a matter of pure convenience, but implementing them in userspace adds no security value.


While it's true that the attack vector you describe exists, assuming you can trust both the driver and the hardware, it is hard to exploit, as most software cannot talk directly to the driver. Even if it does, manipulating it the way you describe is challenging. By comparison, it's not only easy to talk to most drivers in a monolithic kernel, there are also many more ways to manipulate them that will result in the entire system being owned. Security isn't all or nothing if designed robustly. I hope the case for IOMMUs becomes stronger due to the increased popularity of operating systems with userspace drivers. The BOM cost is simply worth it.


Customers of QNX, vxWorks and INTEGRITY OS beg to differ.


QNX works on real-time microcontrollers, where you just don't have the kind of hardware that can subvert an entire system by doing rogue DMA transfers. General-purpose systems are very different.


QNX works on whatever CPU one puts it on - plenty of choice, including plain Intel ones.


QNX is for "big embedded", not microcontrollers - it's plausible to have a POWER9 CPU as a deployment target. So is VxWorks.


>> context switching between user and kernel space.

The amount of context-switching can be reduced these days by devoting cores to tasks/processes instead of tasks on a single core.

It's not a free ride, however, as you pay with cache efficiency. For some systems, though, I think it would pay off: the efficiency would increase with the number of cores until you get to one core per userspace task. At that point, it's just IPC cost.
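
For illustration, this is roughly what dedicating a core looks like on Linux today (standard sched_setaffinity; a microkernel could do the same for its userspace servers):

    /* Pin the calling process to one core so it stays put instead of being
     * migrated and context-switched around; each userspace task/service would
     * get its own core in the scheme described above. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    static int pin_to_cpu(int cpu) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        return sched_setaffinity(0, sizeof(set), &set);  /* pid 0 = this process */
    }

    int main(void) {
        if (pin_to_cpu(3) != 0) {  /* dedicate this task to CPU 3 */
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to CPU 3\n");
        return 0;
    }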


I think the point of Fuchsia to Google is that it's not Linux, nor bound by the GPL. If it works, it's the end of FOSS.


That seems an odd statement to make about an operating system that is fully open source.


Why do you say it will be the end of FOSS? Fuchsia is still released under MIT license at present. If it is successful, another version could be released under GPL and built up as a competitor to the non GPL version.


Billions doesn't sound right, are there any expenses other than salaries?


Some stats about the various ways of writing the project name (taken from these comments):

Fuchsia: 39

Fuschia: 19

Fushia: 1

Fuchia: 1

Other: ?

...so we can conclude that the UX of this codename is terrible. Maybe they should change it to something more easy to remember, e.g. Fuxia (Fucksya would probably also be easier to remember, but could be deemed too obscene)?


My conclusion from reading this is that Americans can't spell.


Then you can imagine how hard the name is for the other 7.5 billion people.


Is it? "Fuchsia" and derivatives are the names of a colour in romance languages - that covers a big chunk of that.


The colour is named after a plant, which in turn is named after a German chap called Fuchs.


I am Romanian and I have no trouble spelling and writing "Fuchsia".


It's still a situation with no clear solution.

Either it's a common word that is just hard to spell, and sticking to the actual spelling is the saving grace, as people who didn't know the spelling will have a fighting chance to remember it.

Or they come up with an easier-to-spell version (e.g. "fuxia"), but then those who knew the right spelling will have to remember both, and those who "misspelled" the flower will be thinking Google's project name is the "correct" spelling.


There's still plenty of uncommon words that are easy to spell. "Anodyne" is a good example, it even sounds cool, but sadly means something not so cool.


> My conclusion from reading this is that Americans can't spell.

My conclusion from reading English all my life is that it’s a terrible unintuitive language and using the Latin alphabet for so many different-sounding languages wasn’t such a great idea.

Much like C++; taking the worst of all worlds.


Fuchsia is named after a German.


Sometimes discussions creep towards a superset of the context.


TL;DR: 'fuschia' actually works better with English spelling than 'fuchsia'.

English actually has a set of spelling rules that make it generally possible to predict the pronunciation of most words, while fuchsia is one of the words that sits well outside the spelling rules. The most notorious bits of spelling come when, as here, insistence is made on preserving the spelling despite sound changes making it completely untenable.

The original pronunciation of fuchsia in the German should be something along the lines of "fook-see-a". In English, the 'sia' would naturally want to affricate in that position (the same process that makes -tion pronounced 'shun'), which would lead to "fook-sha". I guess somewhere along the line, the k phoneme dropped, but the u also changes into an English long u (as in, it becomes the 'u' in 'cute' or 'cuticle').

The end result is that to spell the English pronunciation correctly, you need 'f', long 'u', something to spell the 'sh' phoneme, a vowel to make 'u' long (usually 'e', but 'i' can do the job in a pinch), and then 'a'. 'Fuschia' would be a spelling that comports with the expected pronunciation--while 'sch' is not the most common way of spelling 'sh', it is a way of doing so. And if you have only a vague memory of what the word should look like, it has all of the requisite letters, and it yields the expected pronunciation--where the 'correct' spelling doesn't.

I'm generally a pretty good speller in English, and I will freely admit that the only way I can remember how to spell fuchsia correctly is that it's the spelling that fucks up--replace k with h.


Naming it "Fucksya" wouldn't be the best advertisement either


They’re being fearless about vendor lock-in!


The more popular it becomes, the more people will learn to correctly spell fooshia.


...I honestly propose a new "F*ks Yeah!" spelling for this project just to mindfuck with future programmer archaeologists


The name is perfectly fine from the perspective of a german-native speaker, just like the color Fuchsia.


...and no comments in here with either pink or Taligent?


To be fair, I didn't think anyone would be able to spell 'kubernetes' correctly. But life found a way - thank goodness for numeronyms!


A plant family named after a person called Fuchs.


A plant family named after a person called Fucks.


The Fuchs surname means "fox," from the Middle High German vuhs, meaning "fox."

Google makes a foxy OS. Shame it's not a fire-fuchs


In Italian the color name is fuxia


The color's name is Fucsia, in Italian. Fuxia does not exist https://linguaegrammatica.com/fucsia-o-fuxia-come-scrivere


I think you are partly correct and partly not. If "fuxia" exists as a way to refer to the color in many places and/or instances, then "fuxia" exists as a way to refer to the color, and the claim that it doesn't exist is just theoretical https://www.google.com/search?q=fuxia&tbm=isch


True, but then again Italian has regular pronunciation rules and "x" is always pronounced the same way as "cs", so even if "fuxia" is not a proper spelling it's a homophone and I wouldn't be surprised to see somebody make that spelling mistake.


Not to mention the Fuchsia logo looks very much like the Fedora Linux logo.


Fewsha?


A useful mnemonic may be “Fuck Sia”, at least that was how I broke it down.


Watch your mouth, young fellow!!! ;)

https://sia.tech


In case you need some context on this project, from: https://en.wikipedia.org/wiki/Fuchsia_(operating_system)

Fuchsia is an open-source capability-based operating system developed by Google. In contrast to prior Google-developed operating systems such as Chrome OS and Android, which are based on the Linux kernel, Fuchsia is based on a new kernel named Zircon. It first became known to the public when the project appeared on a self-hosted git repository in August 2016 without any official announcement. After years of development, Fuchsia was officially released to the public on the first-generation Google Nest Hub, replacing its original Cast OS.


Meta note about this site: if you disable web fonts it looks almost like Zalgo text. I wish people would stop using special fonts for icons. I disable downloadable fonts in Firefox* because the janky repaints and slower page loads aren't worth the pixel-perfect text, but unfortunately it breaks a lot of sites that misuse fonts for icons.

*gfx.downloadable_fonts.enabled


Honestly, what are people's thoughts on automatic updates being built into the OS? They can be good for patching security vulnerabilities, but automatic updates are also incredibly unpopular (see Windows).

I personally wouldn't want an OS that updates according to the developers wishes, with no control on my end. It reeks too much of corporate control, from simple things like changing UI or functionality to much bigger things like removing features because they are no longer in the interest of the company.


> If interested, you can configure your Workstation to receive automatic updates.

If it stays here, I'm 100% on board; I wouldn't even mind defaulting to on. Forced automatic updates (hi, Microsoft) are a terrible move. I would also say that separating bug/security fixes from features and breaking changes helps: if my OS vendor had a way to auto-patch only security issues, and could ensure zero breakage as a result - no new features, no changed UX, nothing but invisible security patches - I'd enable it in a heartbeat. And they won't, so I won't :)


it sounds like you're looking for debian stable ;)


It makes sense if Fuchsia's main use cases are IoT devices, given how common public-facing vulnerable IoT devices made by defunct companies are. It also makes sense if Fuchsia eventually replaces Linux as the Android kernel, given the historical short EoLs and lack of security updates (see: https://androidvulnerabilities.org).

For a development machine, I'd honestly... be pretty fine with it. That's similar to how Arch Linux operates (but change hourly to daily or weekly) and it causes very few problems. I think I've had maybe two or three big issues caused by updates in the past year, plus a couple application-specific issues (not kernel-level or os-level). With a more extensive regression testing team I can see Fuchsia following through with the promise of seamless updates.

Windows updates also have possibly the worst reputation you can get (except, maybe, iOS update's reputation when Apple was slowing down older devices), so almost anything Google does will be better. Not needing to reboot (presumably, if they're checking every hour) after an update will also help.


Windows updates are unpopular due to their implementation. Among other issues: they require a restart, they run complicated transactional flow rather than snapshots + patching the files, they kill the desktop state, they require extra processing on the next startup right as you want to actually do work, the whole delivery system is super complicated to suit enterprises, and yet it is still not able to handle user apps in the same flow.

If they implemented proper A/B booting with preservation of the intent/state of open apps, most people wouldn't even realise an update happened and wouldn't complain.
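
For anyone unfamiliar with the A/B scheme, the bootloader side is conceptually tiny; a toy sketch (made-up metadata layout, not the real Android or ChromeOS format):

    /* Toy sketch of A/B slot selection (made-up metadata, not the real Android
     * or ChromeOS format): the updater writes the inactive slot, bumps its
     * priority and gives it a retry budget; the bootloader falls back to the
     * old slot if the new one never gets marked successful. */
    #include <stdbool.h>
    #include <stdio.h>

    struct slot {
        int  priority;    /* updater bumps the freshly written slot */
        bool successful;  /* set by userspace after a healthy boot */
        int  tries_left;  /* decremented by the bootloader per attempt */
    };

    static int pick_slot(const struct slot s[2]) {
        int best = -1;
        for (int i = 0; i < 2; i++) {
            if (!s[i].successful && s[i].tries_left <= 0)
                continue;                                 /* exhausted, skip */
            if (best < 0 || s[i].priority > s[best].priority)
                best = i;
        }
        return best;                                      /* -1 => recovery */
    }

    int main(void) {
        struct slot slots[2] = {
            { .priority = 1, .successful = true,  .tries_left = 0 },  /* old OS */
            { .priority = 2, .successful = false, .tries_left = 3 },  /* new OS */
        };
        int chosen = pick_slot(slots);
        if (chosen < 0) { printf("nothing bootable, enter recovery\n"); return 1; }
        printf("booting slot %c\n", chosen == 0 ? 'A' : 'B');
        return 0;
    }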


I find them awful.

Even ignoring the philosophical "it's my machine dammit, it'll update when I damn well want it to and not before" POV[0] it is a practical mess on[1] two fronts:

1. I use my Windows machine for computation, ergo it needs to be running, and it costs me money and productivity when it is not. If it restarts to update, I lose hours of CPU time at minimum, and more until I notice and can get things restarted.

2. It makes what should be core user functionality useless. I would like to set up my desktop, with some programs open, a few spreadsheets mayhaps, browser to discord, reddit, HN, etc. Maybe a second desktop relegated to some other task. I can't do this, because windows will force a restart in the next 144 hours which will invariably fubar anything I've tried to set up.

[0]To which I strongly adhere.

[1]At least.


It looks like everything runs decoupled in user space, even the filesystem, drivers and packages. Meaning that updates to the system happen per part and require no reboots (not even for filesystem updates, according to the docs). So the updates are independent, and the packages have their own life cycles and work independently of the underlying system.

Having an auto-updating, fully sandboxed OS, without reboots, really doesn't sound that bad.

Especially if it's only kernel, stability and security updates.


Running in user space doesn't automatically decouple things; dbus is a user space process, but restarting it is traumatic because half the system uses it.


That's because dbus hasn't been written with no-downtime restarts in mind, but it's not that challenging to implement, especially when client apps are interfacing through (UNIX) sockets.

It's not dissimilar to writing zero-downtime web services.
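
The usual building block for that kind of restart on Unix is handing the live listening socket to the new instance over an AF_UNIX socket, so clients never notice. A minimal fd-passing helper, for illustration (standard SCM_RIGHTS, error handling kept to a minimum):

    /* Hand an open file descriptor (e.g. a live listening socket) to another
     * process over an AF_UNIX socket via SCM_RIGHTS - the usual trick behind
     * zero-downtime restarts of socket-based daemons. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    int send_fd(int unix_sock, int fd_to_pass) {
        char dummy = 'x';
        struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

        union {
            char buf[CMSG_SPACE(sizeof(int))];
            struct cmsghdr align;             /* forces proper alignment */
        } ctrl;
        memset(&ctrl, 0, sizeof(ctrl));

        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = ctrl.buf, .msg_controllen = sizeof(ctrl.buf),
        };

        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type  = SCM_RIGHTS;
        cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

        return sendmsg(unix_sock, &msg, 0) == 1 ? 0 : -1;
    }

The new instance recvmsg()s the descriptor, keeps accepting on it, and the old instance exits once it has drained its in-flight requests.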


Sure, but that's not a function of user/kernel space; Linux has live patching and dbus doesn't. It would be cool if fuchsia makes every OS component support non-disruptive updates, I'm just trying to point out that that doesn't automatically follow from running parts in user space.


Good point, updated my reply. While Google doesn't explicitly say that they want to support run-time updates of the kernel and OS, it's implied in their system architecture that such is the case.


Very difficult to do that, though, without enormous effort in transparent state persistence across restarts. Restarting even something with a very simple non-transactional API and state base, like a file system, requires persisting all the file handle state, and that's the best case. Restarting a UI subsystem is much harder still because the apps may have a lot of uploaded state, things may need to be changed transactionally, etc.


Web apps update every time you open them, so it seems we won’t be able to escape this in the long run.


This and the fact that they (usually) install in under a second is the entire explanation for the success of the Web, in my opinion.


Windows updates are mostly unpopular because they have a tendency to happen when you are in the middle of something else which is incredibly annoying.


The problem is that automatic updates are both feature updates and security updates.

I doubt many people will have a problem with security updates that have no impact on the feature space.

But feature updates tend to break UX flows and not many people are happy about that. Your comment about corporate control also applies to feature updates.


I can imagine eventual (de-googled?) variants which will update only from a local server. In principle, at that local server level one might manually update repositories and control the version(s) presented to Fuchia clients.


I guess it depends on how it’s handled. If their success rate is as poor as windows updates then obviously I wouldn’t touch them with a ten foot pole for anything critical. But if the updates go as smoothly as iOS (or windows defender/macOS security) updates tend to, then I think the pros outweigh the cons for the majority of individuals for any devices that are regularly on the internet.


Please. Help us small businesses. Extend cycles to get better coverage of security issues, and fewer feature changes, in updates.


I would imagine any consumer product running it would work like ChromeOS, with a second system image that gets updated in the background and then swapped in at boot. Windows Update is hated for how bad the implementation is... but people also don't like having out-of-date software and features.


Fuchsia has an interesting new approach to the handling of updates (it should be more stable). We will see how this plays out when it is no longer a developer playground with nightly builds.


Fuchsia, like Waymo, is never going to take off. It's been half a decade and they still haven't shipped.


They've shipped it on some Nest Hubs already. In fact, it's pretty nifty that their update system allows them to replace the OS completely, from a Linux-based OS to a Fuchsia OS.


That's...not terribly long for a new OS intended for consumer products? This isn't like ChromeOS or Android where it's building on top of Linux, it's practically all-new, at least as I understand it.

And they actually did ship on one class of Nest devices recently. It's a limited use case, but that's expected for the first product release.


5 years is fine for such a project.

It will 'take off' if it finds a practical use case.

If they start making Androids based on this, it will 'take off'.

If Tesla, Ford, Volkswagen use it in their cars, it will 'take off'.

etc..

I don't have nearly enough insight to know one way or another, but G is a big company and if they want to do something they will.

My point is, the 5 years and no traction isn't necessarily indicative of that much.


> Fuchsia, like Waymo, is never going to take off

It is already running on a Nest Hub today. So it is already shipped.

Given that they have Chrome [0] already running on it, I think you know what Google will also be replacing.

Fuchsia seems that it will go beyond running on Nest Hubs in the future.

[0] https://9to5google.com/2022/03/04/full-google-chrome-browser...


> I think you know what Google will also be replacing.

Are you talking about Chrome OS?


Yes.


What makes you think Waymo won't 'take off'? It seems to be doing fine in its test markets.


They just need to put quadcopter blades on the cars. Then they will take off and even fly themselves to destination. Sounds pretty neat, doesn't it.


Hope you are right, but half a decade doesn't seem long for an OS?


Android took half a decade to ship anything.


Nest Hubs run Fuschia.


[flagged]


I'm not sure what you're trying to say? Who are "these people"? The people who write Fuchsia's documentation? And they can't stop what? Adding DEI messages to the header of their website?


>Who are "these people"?

Various Google people. See also the Go doc pages in the recent past.

>And they can't stop what?

Displaying intrusive ideological messaging on technical websites, where it does not belong.


> intrusive ideological messaging

Would "Martin Luther King Day" also be intrusive ideological messaging? What about "Christopher Columbus Day"? I'm interested on where the line is here for you.


[flagged]


All this info is available in the "getting started" section.

   For GPU support, get a NUC7 (Kaby Lake) or NUC8 (Coffee Lake), or a higher generation.

Building the NUC target now as I have a supported model on hand. Excited to try this :)


> But it seems that I expect too much

This is not a consumer product, so probably.


the shell is limited and the environment can't really host a full dev stack. I continue to fail to see the point of this project that eats other projects.


I'd really rather use something that respects my privacy.


I have my hands full already. Not sure why I would want to play with an unfinished product and use yet another proprietary language for the sake of Google getting more and more into my life. I think I'll pass.



