
IMHO Cook is following good development practices.

You need to know what you support. If you are going to change something, the change must be planned somehow.

I find Torvalds reckless for changing his development environment right before a release. If he really needs that computer to release the kernel, it must be a stable one. Even better: it should be a VM (hosted somewhere) or part of a CI/CD pipeline.


The real problem here was "-Werror", dogmatically fixing warnings, and using a position of privilege to push in last-minute commits without review.

Compilers will be updated, and they will have new warnings; this has happened numerous times and will happen again in the future. The Linux kernel has always supported a wide range of compiler versions, from the very latest to 5+ years old.

I've ranted about "-Werror" in the past, but to keep it concise: it breaks builds that would and should otherwise work. It breaks older code with newer compilers, and with compilers for other platforms. This is bad because then you can't, say, use the exact code specified/intended without modifications, or you can't test and compare different versions or different toolchains, etc. A good developer will absolutely not tolerate a deluge of warnings all the time; they will decide to fix the warnings to get a clean build, over a reasonable time and with well-considered changes, rather than be forced to fix them immediately with brash, disruptive code changes. And this is a perfect example of why. New compiler, fine; new warnings, fine. Warnings are a useful feature, distinct from errors. "-Werror" is the real error.
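To make the failure mode concrete, here is a minimal sketch. It assumes GCC 15's new -Wunterminated-string-initialization warning (by most accounts one of the warnings involved here) and hypothetical gcc-14/gcc-15 binaries in $PATH:

    # tag.c contains valid C that older compilers accept silently:
    #     char tag[3] = "abc";   /* no room for the trailing NUL */
    # GCC 15 warns about this; with -Werror the new warning becomes
    # a hard build break on the new compiler only.
    CFLAGS := -Wall -Wextra -Werror

    old:    # e.g. gcc-14: the warning does not exist, build is clean
            gcc-14 $(CFLAGS) -c tag.c

    new:    # e.g. gcc-15: same code, same flags, now a build failure
            gcc-15 $(CFLAGS) -c tag.c

Drop -Werror and the second build still succeeds: the new warning stays visible and can be fixed calmly instead of blocking everyone.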


With or without -Werror, you need your builds to be clean with the project's chosen compilers.

Linus decided, on a whim, that a pre-release of GCC 15 ought to suddenly be a compiler that the Linux project officially uses, and threw in some last-minute commits straight to main, which is insane. But even without -Werror, when the project decides to upgrade compiler versions, warnings must be silenced, either through disabling new warnings or through changing the source code. Warnings have value, and they only have value if they're not routinely ignored.

For the record, I agree that -Werror sucks. It's nice in CI, but it's terrible to have it enabled by default, as it means that your contributors will have their build broken just because they used a different compiler version than the ones which the project has decided to officially adopt. But I don't think it's the problem here. The problem here is Linus's sudden decision to upgrade to a pre-release version of GCC which has new warnings and commit "fixes" straight to main.
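For what it's worth, kbuild already has a helper for the "disable the new warning deliberately, fix the code over time" route mentioned above. Roughly (the helper is real, defined in scripts/Makefile.compiler; the warning name is GCC 15's, used here just as an example):

    # expands to -Wno-<warning>, but only if the compiler
    # recognizes the warning at all
    KBUILD_CFLAGS += $(call cc-disable-warning, unterminated-string-initialization)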


Sadly, I lost that battle with Torvalds. You can see me make some of those points on LKML.

I see, thanks. (Found it here: https://lkml.org/lkml/2021/9/7/716)

This is my takeaway as well. Many projects let warnings fester until they hit a volume where critical warnings are missed amidst all the noise. That isn't ideal, but it seems to be the norm in many spaces (for instance the Node.js world, where it's just pages and pages of warnings and deprecations and critical vulnerabilities and...).

But pushing breaking changes just to suppress some new warning should not be the alternative. Working to minimize warnings in a pragmatic way seems more tenable.


He releases an rc every single week (OK, except before rc1, when there's a two-week merge window), so there's no "off" time in which to upgrade.

Not that I approve of the untested changes; I'd have used a different GCC temporarily (a container or whatever), but, yeah, well...


I find it surprising that Linus bases his development and release tools on whatever's in the repositories at the time. Surely it is best practice to pin to a specified, fixed version and upgrade as necessary, so everyone is working with the same tools?

This is common best practice in many environments...

Linus surely knows this, but here he's just being hard-headed.


People downloading and compiling the kernel will not be using a fixed version of GCC.

Why not specify one?

That can work, but it can also bring quite a few issues. Mozilla effectively does this: their build process downloads the build toolchain, including a specific Clang version, during bootstrap, i.e., while setting up the build environment.

This is super nice in theory, but it gets murky if you veer off the "I'm building current mainline Firefox" path. For example, I'm a maintainer of a Firefox fork that often lags a few versions behind. It has substantial changes, and we are only two guys doing the major work, so keeping up with current changes is not feasible. However, this is a research/security-testing-focused project, so that is generally okay.

However, coming back to the build issue: apparently it's costly to host all those toolchain archives, so they frequently get deleted from the remote repository, which leads to the build only working on machines that downloaded the toolchain earlier (i.e., not a GitHub Actions runner, for example).

Given that there are many more downstream users, across effectively a ton of kernel versions, this quickly gets fairly expensive and takes a ton of effort unless you pin to some old version and rarely change it.

So, as someone wanting to mess around with open-source projects, their supporting more than one specific compiler version is actually quite nice.


Conceptually it's no different from any other build dependency. And it is not expensive to host many versions: a GCC release tarball is on the order of 100 MB, so even 1000 compiler versions is roughly 100 GB, a dollar or two per month at typical object-storage prices, which would be overkill for the needs of the kernel.

How would that help? People use the compilers from their distros, regardless of what's documented as the supported version in some README.

Because then, if something that is expected to compile doesn't compile correctly, you know to check your compiler version. It is the exact same reason why you don't just specify which libraries your project depends on but also their versions.
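A pin could be as small as a guard at the top of the Makefile. A rough sketch, where PINNED_GCC is a made-up variable (the kernel today only enforces a minimum, via scripts/min-tool-version.sh):

    PINNED_GCC := 14.2.0
    CC_VERSION := $(shell $(CC) -dumpfullversion)

    ifneq ($(CC_VERSION),$(PINNED_GCC))
    $(error this tree expects GCC $(PINNED_GCC), found $(CC_VERSION))
    endif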

People are usually going to go through `make`; I don't see a reason it couldn't be instrumented to (by default) acquire an upstream GCC instead of whatever forked garbage ends up in $PATH.
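Something along these lines; purely a sketch, and the URL and tarball layout are invented:

    GCC_VER := 14.2.0
    TOOLDIR := $(CURDIR)/toolchain/gcc-$(GCC_VER)
    CC      := $(TOOLDIR)/bin/gcc

    # fetch a pinned upstream toolchain once, instead of trusting $PATH
    # (recipe lines are tab-indented in a real Makefile)
    $(CC):
            mkdir -p $(TOOLDIR)
            curl -fL https://example.org/toolchains/gcc-$(GCC_VER)-x86_64.tar.xz \
                    | tar -xJ -C $(TOOLDIR) --strip-components=1

    build: $(CC)
            $(MAKE) CC=$(CC)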

This would result in many more disasters, as the system GCC and the kernel GCC would quickly get out of sync, causing all sorts of "unexpected fun".

Why would it go wrong? The ABI is stable and independent of the compiler. You would hit issues with C++, but not with C. I have certainly built kernels using different versions of GCC than what the stuff in /lib was compiled with, without issue.

You'd think that, but in effect kconfig/kbuild has many cases where it says "if the compiler supports flag X, use it", where X implies an ABI break. Per-task stack protectors come to mind.
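The idiom looks roughly like this (simplified; the real definition of cc-option lives in scripts/Makefile.compiler, and the flag shown is the arm64 per-task stack protector one):

    # $(call cc-option,FLAG) expands to FLAG if $(CC) accepts it, else to
    # nothing -- so two compiler versions can silently build kernels with
    # different stack-protector ABIs from the same source and .config
    KBUILD_CFLAGS += $(call cc-option,-mstack-protector-guard=sysreg)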

Ah that's interesting, thanks

I'm completely unsure whether to respond "it was stable, he was running a release version of Fedora" or "there's no such thing as stable under Linux".

The insanity is that the Kernel, Fedora, and GCC are so badly coordinated that a beta of the compiler breaks the Kernel build (this is not a beta, this is a pre-alpha in a reasonable universe... is the Kernel a critical user of GCC? Apparently not), and a major distro packages that beta version of the compiler.

To borrow a phrase from Reddit: "everybody sucks here" (even Cook, who looks the best of everyone here, seems either oblivious or defeated about how clownshoes it is that released versions of major Linux distros can't build the Kernel; the solution of "don't update to release versions" is crap).

(Writing this from a Linux machine, which I will continue using, but also sort of despise).


Just write what you know: they won't be able to know where you were...

... but they will be able to tell if you're lying to them, because they are trained to read people.


Makes sense. I'd want it to be as accurate as possible, but it's not going to be perfect. I figure I can extract most flight info out of Gmail if I filter a data export by airline, which should get me 90% of the way.


Not just Microsoft. I remember iPlanet Directory Server... which got renamed to Sun Directory Server, Sun Java System Directory Server, Sun Directory Server Enterprise Edition, Java One Directory Server, Oracle Java Directory Server...

I believe Oracle renamed everything back to iPlanet because customers never used the new names.


I believe that when we talk about this there's a big misunderstanding between copyright, trademarks, and fair use.

Indy, with his logo, whip, and hat, is a Disney trademark. I don't know the specifics; but if you sell a t-shirt with Indiana Jones on it, or you put the logo there... you might be sued for trademark violation.

If you make copies of anything developed, sold, or licensed by Disney (movies, comics, books, etc.), you'll have a copyright violation.

The issue we have with AI and LLMs is that:

- The models compress information and can make a lot of copies of it very cheaply.

- Artist wages are quite low. Higher than what you'd pay OpenAI, but not enough to make a living unless you're hired by a big company (like Marvel or DC) that gives you regular work ($100-120 for a cover, $50-80/page for interior work; one page takes about one day to draw).

- AI used a lot of images from the internet to train models. Most of them were pirated.

- And, of course, it is replacing low-paying jobs for artists.

Also, do not forget it might make verbatim copies of copyrighted art if the model just memorized the picture / text.


If you're paying for tokens used to generate that...


Then the service that sold you tokens and delivered copyright-infringing content is violating the law.


My father was born in a small village in Guadalajara, Spain. I remember that in the village my grandma and other neighbours took their chairs outside their homes to talk at the end of the day. It is great to see good things coming back. Do it... MORE.


If that happens, count me in to use Anubis to factor large primes or whatever science needs as a background task.


Actually, that is not a bad idea. @xena, maybe Anubis v2 could make the client participate in some sort of SETI@home project, creating the biggest distributed cluster ever :-D


Oh come now, clearly Anubis should make the clients mine bitcoin as proof of work, with a split for the website and the author.

Oh dear, somebody is going to implement this in about an hour, aren't they....


Just in case you didn't know, cryptominers in JavaScript are already a thing. Firefox even blocks them.


A service that allows you to expose and host your data in a private manner, getting a cut from whatever tokens your endpoints have generated.


Shouldn't you factor composite numbers? Factoring prime numbers is pointless.


The first time you go to London, you visit everything else; but Greenwich is great. Probably because it is far from the city center, but it's totally worth the visit. I guess next time I travel to London I'll stay there.


The fast commuter boat from Greenwich to Westminster Pier is my favourite way to get into town.


If you visit the Greenwich Observatory, you'll see Harrison's sea clocks H1, H2, and H3.

I had the luck (and honor!) to meet a volunteer guide - Stephen - who knew everything about clocks and explained it all perfectly.

When you meet him, say hello :-). My kid barely spoke English and Stephen had the patience and virtue to answer everything with a smile.

Best guide I ever met.


It is a wonderful place. The walk up the hill at my advanced age was a challenge, but well worth it!


Excalidraw and PlantUML. Each one has its own benefits and drawbacks: PlantUML is great for version-controlled files, Excalidraw for throwaway diagrams and discussion. Occasionally I draw comic strips with Excalidraw.

