This is a project which builds firmware for multiple devices and processor architectures, supporting developers running various operating systems locally, and includes support for documentation generation and firmware localization. It doesn't sound too strange to me that such a project includes a decent amount of tooling to make compiling things accessible to a layperson and to keep a healthy influx of community and corporate contributions that don't diminish software quality. None of the dependencies mentioned in the Dockerfile [1] really seem out of place to me, and I don't think the documentation generation or checkstyle packages are critical to compiling the firmware.
Besides, you can always treat a built Docker image as a stable toolchain archive if that's a concern; there's little reason to assume that it won't work 12 years into the future: as far as I can tell, none of the software relies on being run inside a Docker container.
> This is a project which builds firmware for multiple devices and processor architectures, supporting developers running various operating systems locally, and includes support for documentation generation and firmware localization.
Exactly the source of the issue. The scope of the project is just preposterous for what it is. I'm not sure what the proportion between boilerplate and actual useful functionality is, but from the little that I saw it is outrageous.
> Besides, you can always treat a built Docker image as a stable toolchain archive if that's a concern; there's little reason to assume that it won't work 12 years into the future as
I heavily disagree with this assumption and the rest of the assumptions related to the stability of the dependencies.
The scope of the project is for the project to decide, and 168 contributors and thousands of users seem to disagree with you here.
I'm not really sure what smaller set of dependencies you're used to, beyond a compiler, make, a scripting environment to orchestrate things (bash), and some (other) scripting environment to cook assets (Python 3). I suppose that last bit is something you're not used to in a more embedded world, but in the world of user-facing tools with UIs, it's really not uncommon at all to depend on some font library or internationalization library so that you can generate an image or display some text. The latter is presumably fairly important, given that the users and hardware manufacturers this project supports aren't all based in places where English is the native language. I'm not sure localization can be pulled out of scope, because of that.
> > Besides, you can always treat a built Docker image as a stable toolchain archive if that's a concern; there's little reason to assume that it won't work 12 years into the future as
> I heavily disagree with this assumption and the rest of the assumptions related to the stability of the dependencies.
Docker images quite literally contain an entire (userspace) root filesystem. As long as you have an existing Linux installation on an x86 processor, a kernel without breaking changes relative to the one the image was built against, and some way of extracting a tarball, you can take the image you built 12 years ago, extract its contents, and run all of the tools embedded within (gcc, Python 3, make, bash) outside of Docker, without any dependency issues: all of the dependent libraries are already inside the image (if they weren't, the project's CI builds would not work at all!).
You can verify this quite easily: install Docker, then run `docker pull ubuntu:latest; docker save ubuntu:latest | gzip > test.tar.gz` (note that `docker save` emits a plain tar to stdout or via `--output`, so the compression step is yours).
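To make the "stable toolchain archive" point concrete: a `docker save` archive is essentially layer tarballs applied in order, so unpacking one without Docker is just repeated `tar -xf`. A rough sketch, using stand-in layer tars built on the spot (the file names and layout are illustrative, not the real IronOS image):

```shell
set -e
# Fake two image layers; in a real docker-save archive these would be the
# per-layer tars referenced by manifest.json.
mkdir -p demo/layer1/usr/bin demo/layer2/usr/bin rootfs
echo 'stand-in for gcc'     > demo/layer1/usr/bin/gcc
echo 'stand-in for python3' > demo/layer2/usr/bin/python3
tar -cf demo/l1.tar -C demo/layer1 .
tar -cf demo/l2.tar -C demo/layer2 .

# Apply the layers in order; later layers overlay earlier ones.
for layer in demo/l1.tar demo/l2.tar; do
  tar -xf "$layer" -C rootfs
done

ls rootfs/usr/bin   # both "tools" now sit in one root filesystem
```

With a real image you'd then `chroot` into `rootfs` (or just set library paths) and run the embedded gcc/make/python3 directly, no Docker daemon involved.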
I'll agree with you that a user who stumbles upon this project 12 years from now (assuming development ceased today) will likely face some challenges: you'd have to source the dependencies from somewhere, and the repository URLs used today may no longer be available by then (most projects probably suffer from this). But if someone builds the IronOS development image from the Dockerfile today and saves it, I really don't know what would have to happen for it to become impossible to get the compiler and other tools contained within to run on supported hardware in 12 years.
EDIT: Imagine what the project owners would have to do to accomplish the same things they're doing now (building documentation, cooking required assets) without relying on third-party tools or programming languages other than C. They'd have to spend time writing font parsers, documentation generators, build scripting tooling, and much more! In a sibling comment you mentioned that "it seems to me that at some point this industry stopped trying to solve real issues", but I'd argue that pulling the building of those tools into the project's scope just to avoid dependency issues is exactly that: solving issues that are not within their scope or merit to solve.
I'm asking you what you think about it, not what the devs think about it. Let me phrase it differently: there are a total of 16 languages and 276,969 lines of code in the repo for a "soldering iron firmware".
I'm not sure where you get that language statistic from. I'm going to assume whatever tool you're using thinks that "JSON" and "YAML" are languages, to which I respond that they're not, or at least not in the same sense as a programming language with its own paradigms, libraries, and tools. The repo in question is mostly C/C++, with a relatively small amount of other stuff to provide tooling support, and the languages used for that are really not all that problematic, difficult to understand, or unsupported.
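For what it's worth, here's a toy sketch of why those "language" counts inflate (the file list is hypothetical, not pulled from the actual repo): counters like cloc bucket files by extension, so every data or config format shows up as its own "language" next to the real sources.

```python
from collections import Counter

# Hypothetical file list resembling a firmware repo's layout.
files = [
    "source/Core/main.cpp", "source/Core/OLED.cpp",  # actual firmware code
    "Translations/translation_DE.json",              # localization data
    ".github/workflows/build.yml",                   # CI config
    "Documentation/index.md",                        # docs
    "scripts/make_translation.py",                   # asset cooking
]

# Bucket by extension, the way naive line counters classify "languages".
by_ext = Counter(f.rsplit(".", 1)[-1] for f in files)
print(by_ext)  # Counter({'cpp': 2, 'json': 1, 'yml': 1, 'md': 1, 'py': 1})
```

Five "languages" for what is, by any reasonable reading, a C++ project with some data files and two small scripts.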
As far as LOC goes, I know well enough that it's a meaningless statistic that has very little practical use. I've written 34 lines of JavaScript that were as meaningful as 25k lines of C, but those lines of JS were obviously interpreted on an engine that's millions of LOC.
[1] https://github.com/Ralim/IronOS/blob/80c4b58976268849b6d1c8d...