Hacker News | dingi's comments

Vanilla GNOME user here. GNOME may look like it was designed for tablets, but it has a keyboard shortcut for basically everything, so you don't do much pointing and clicking once you know it. You can, but you don't have to. It just gets out of your way, as they say.

What's the point? They are e-waste as soon as they leave the factory.

Isn't strlcpy the safer solution these days?


I don't think anybody in this thread read the article.

Strlcpy tries to improve the situation but still has problems. As the article points out, it is almost never desirable to truncate a string passed into strXcpy, yet that is what all of those functions do. Even worse, strlcpy walks to the end of the source string regardless of the size parameter (its return value is the full source length), so it doesn't even necessarily save you from the unterminated-string case. It also does loads of unnecessary work, especially if your source string is very long (like an mmap'ed text file).

Strncpy got this behavior because it was trying to implement the dubious truncation feature and needed to tell the programmer where their data was truncated. Strlcpy adopted the same behavior because it was trying to be a drop-in replacement. But it was a dumb idea from the start and it causes a lot of unnecessary pain.

The crazy thing is that strcpy has the best interface, but of course it's only useful in cases where you have externally verified that the copy is safe before you call it, and as the article points out if you know this then you can just use memcpy instead.
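The memcpy alternative the article suggests can be sketched like this (the helper name copy_str and its error convention are made up for illustration): verify the length once up front, and either do the whole copy, terminator included, or refuse outright rather than truncate.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: bounded copy that fails loudly instead of
   silently truncating. Returns 0 on success, -1 if dst is too small. */
static int copy_str(char *dst, size_t dstsize, const char *src) {
    size_t len = strlen(src);         /* verify once, up front */
    if (len + 1 > dstsize)
        return -1;                    /* refuse, don't truncate */
    memcpy(dst, src, len + 1);        /* +1 copies the '\0' too */
    return 0;
}
```

The interface is as simple as strcpy's, but the safety check is forced on the caller through the return value instead of being optional.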

As you ponder the situation, you inevitably come to the conclusion that it would have been better if strings carried their own length instead of relying on a terminator. But then you realize that to support editing the string as well as passing substrings, you need some struct holding a base pointer and a length, and possibly a substring offset and length, and you've just re-invented slices. It's also clear why a system like this wasn't invented for the original C, which was developed on PDP machines with just a few hundred KB of RAM.
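The "re-invented slice" amounts to something like this sketch (the names str_slice, slice_from_cstr, and slice_sub are made up for illustration): a base pointer plus an explicit length, with substrings as zero-copy views into the same buffer.

```c
#include <stddef.h>
#include <string.h>

typedef struct {
    const char *ptr;   /* base pointer (not necessarily NUL-terminated) */
    size_t      len;   /* explicit length replaces the terminator */
} str_slice;

static str_slice slice_from_cstr(const char *s) {
    return (str_slice){ s, strlen(s) };
}

/* A substring is just another view: no copy, no allocation. */
static str_slice slice_sub(str_slice s, size_t off, size_t len) {
    if (off > s.len) off = s.len;              /* clamp to valid range */
    if (len > s.len - off) len = s.len - off;
    return (str_slice){ s.ptr + off, len };
}
```

The catch, of course, is that nothing in the standard library or any third-party API accepts str_slice, so every boundary crossing needs a conversion, which is exactly why a blessed standard type matters.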

Is it really too late for the C committee to develop a modern string library that ships with base C26 or C27? I get that they really hate adding features, but C strings have been a problem for over 50 years now, and I'm not advocating for the old strings to be removed or even deprecated at this time. Just that a modern replacement be made available, and that people be encouraged to use it for new code.


Do they really need to at this point? Just include bstrlib and stop thinking about it?


Having an official replacement is the only thing that I think will motivate the majority of C programmers to finally switch.


> Is it really too late for the C committee to develop a modern string library that ships with base C26 or C27? I get that they really hate adding features, but C strings have been a problem for over 50 years now, and I'm not advocating for the old strings to be removed or even deprecated at this time. Just that a modern replacement be made available, and that people be encouraged to use it for new code.

The next version of C (C2y) is expected to be C29, not C26 or C27. And work has been done on a new string library: see, e.g., https://www.open-std.org/jtc1/sc22/wg14/www/docs/n3306.pdf (not the only proposal!). That said, I would be surprised if anything gets merged into the standard in less than a decade, simply because the committee is not organizationally set up for major library overhauls like this.


A lot of OSS burnout comes from a broken assumption: that publishing code creates an obligation.

Historically, open source meant "here's code, use it if it helps, fix it if it breaks." No support contracts, no timelines, no moral duty. GitHub-era norms quietly inverted that into unpaid service work, with entitlement enforced socially ("be nice", "maintainers owe users").

Intrinsic motivation is the only sustainable fuel here. Once you start optimizing for users, stars, adoption, or goodwill, pressure accumulates and burnout is inevitable. When you build purely because the work itself is satisfying, stopping is always allowed, and that's what keeps projects healthy.

Hard boundaries aren't hostility; they're corrective. Fewer projects would exist if more maintainers adopted them, but the ones that remain would be stronger, and companies would be forced to fund or own their forks honestly.

Open source doesn't need more friendliness. It needs less obligation.


I have a lot of nostalgia for Canonical. I still remember the excitement of receiving those free "ShipIt" CDs in the mail; Ubuntu 8.04 was my gateway drug into the Linux ecosystem, and I'll always be thankful to them for making Linux feel accessible back then.

That said, I find myself increasingly at odds with the direction they're taking. The whole Snap vs. Flatpak debacle is exhausting, and personally, I'm not a fan of either. I'd take a standard apt repo over containerized desktop apps any day. Seeing core applications migrate to Snaps and the recent decision to move coreutils to alternate implementations feels like a bridge too far for my taste.

There's also the creeping proprietary integrations to consider. To be honest, this is more of a philosophical stance than a practical one. Ubuntu is still a fantastic "get work done" distro, and I still use it on my office laptop because it just works and it's the only distro that got my employer's stamp of approval.

But for my personal setup? I've moved on. It's Arch for the desktop and Debian for servers. Nothing else really hits that sweet spot of control and simplicity for me anymore.


> Seeing core applications migrate to Snaps and the recent decision to move coreutils to alternate implementations

It's a classic embrace, extend, and extinguish strategy.


Why bother with these obscure boards with spotty software support when you can get a better deal all around with an x86 mini PC with a N150 CPU?


Exactly! Just grab a mini PC such as an OptiPlex; it will be so much better.


I bought a used ThinkCentre Tiny off eBay. It's got great GPU decoding/encoding support, great driver support for all the peripherals, and it can boot from an M.2 NVMe or a SATA drive with a very normal bootloader. It can even run Windows. The mostly-aluminum enclosure is very nice and well-engineered, it's easy to pop open, and it has a power button. It was under $100 and makes me question why any hobbyist bothers with SBCs like these.


Precisely. The 'hustle' culture and the fetishization of hyper-efficiency act as a catalyst for a wide range of systemic societal problems. I'm glad that I'm not part of that sphere.


I don't really need one either, but I'll buy it anyway, mostly because I want to support vendors who make hacker-friendly hardware and software. For that reason alone, Valve gets my money.


That's awesome! I've come to take "the year of the Linux desktop" as a prophecy of sorts. It might take another 20 years, or desktops might vanish, but it is going to happen. Slow and steady wins the race. Best regards from a decade-plus Linux full-timer!


Honestly, assembly is great. It's the closest-to-the-metal, no-nonsense, raw experience you can get. The problem is that it's also tedious and error-prone to write, but the elegant simplicity of the abstraction is still there.


Macros :)

