This is becoming such a tiresome opinion. How are concepts fixing a problem created by previous features of the language? What about ranges? Auto? Move semantics? Coroutines? Constexpr? Consteval? It is time for this narrative to stop.
Move semantics is only needed because C++ introduced implicit copies (copy constructor), and of course they fucked it up by making them non-destructive, so they aren't even 'zero cost'.
Constexpr and consteval are hacks that 1) should have just been the default, and 2) shouldn't even be on the function definition; it should instead have been a keyword at the usage site (and just use const):
int f() { ... } // any old regular function
const int x = f(); // this always gets evaluated at compile time (or, if it can't be, fails to compile)
int y = f(); // this is evaluated at runtime
That would be the sane way to do compile time functions.
I agree that I would have preferred destructive moves, but move semantics makes C++ a much richer and better language. I kinda think pre-move semantics, C++ didn't quite make "sense" as a systems programming language. Move semantics really tied the room together.
> const int x = f(); // this always gets evaluated at compile time (or, if it can't be, fails to compile)
That's very silly. You're saying this should fail to compile?
void foo(int x) {
const int y = bar(x);
}
There's no way the compiler can run that, because it doesn't know what x is (indeed, it would have a different value every time you run the function with a new argument). So your proposal would ditch const completely except in the constexpr case, everything runtime would have to be mutable.
So you respond "well, I didn't mean THAT kind of const, you should have a different word for compile-time constants and run-time non-mutability!" Congratulations, you just invented constexpr.
There are many bad things about C++, but constexpr ain't one of them.
>There's no way the compiler can run that, because it doesn't know what x is (indeed, it would have a different value every time you run the function with a new argument). So your proposal would ditch const completely except in the constexpr case, everything runtime would have to be mutable.
Yeah, I see no problem with that.
Non-constant-expression usage of 'const' has always just seemed like a waste of time to me; I've never found it useful.
But I guess a lot of people really like typing const and "preventing themselves from accidentally mutating a variable" (when has that ever happened?), so as a compromise I guess you can have a new keyword to force constant expressions:
constexpr auto x = foo(); // always eval at compile time
const auto x = foo(); // old timey const, probably runtime but maybe got constant folded.
but it's not really a big deal what the keyword is; the main point was that "give me a constant value" should be at the usage site, not at the function definition.
> the main point was that "give me a constant value" should be at the usage site, not at the function definition.
The issue is, not everything can be done at compile time, and so “I can use this at compile time” becomes part of the signature because you want to ensure that it will continue to be able to be used that way. Without it, changes in your function could easily break your callers.
Exactly right. There's a huge benefit to encoding the ability for compile-time evaluation in the signature of the function itself. Much better than doing it "ad hoc", like how template instantiation does it: sometimes it works, sometimes it doesn't. constexpr functions always work.
I like const because I can look at a function signature and know that nothing downstream is mutating a parameter. I can also write a function that returns a const reference to a member and know that nobody in the future will ever break my invariants.
This isn't about "oops, I didn't mean to mutate that." This is about rapidly being able to reason about the correctness of some code that is leveraging const-qualified code.
I kinda like it on occasion. Works like Python's defaultdict. Like, if you wanna count something:
for (const auto &thing: collection) {
counts[thing]++;
}
Works nicely, you don't have to check if it's already there before ++ing it. As long as you know that's what operator[] does, it comes in handy more than I would've expected.
Yeah. It has its uses. You could accomplish the same with the rather verbose `counts.try_emplace(thing).first->second++` but nobody wants to type that (even if it is more explicit about what it's doing).
Another popular use case is something along the lines of:
That said, I don't know what behavior I'd want if maps didn't automatically insert an element when the key was absent. UB (as with vector)? Throw an exception? Report the incident to Stroustrup? All these options feel differently bad.
Maybe it's not a concern in C-family languages, but rust's culture of defaulting to let and only using mut when it's specifically required does feel very pleasant and ergonomic when I'm in that headspace.
Eh, not really accurate, because C's const means immutable, not actually constant. So I get introducing constexpr to actually mean constant. But, yeah, constexpr x = f() should probably have worked as you described.
const is different in C++ from const in C. const variables in C++ are proper compile-time constants. In C they are not (the nearest equivalents are #define and enum values).
So in C++ "const x = EXPR" would make sense to request compile-time evaluation, but in C it wouldn't.
Ouch, but thanks. I learned something today - something I'd long forgotten. I like your example; it shows the point well. (Though there are circumstances where a compiler can unroll such a loop and infer a compile-time constant, it wouldn't qualify as a constant expression at the language level.)
It's been so long since I used C++ for serious work that we weren't even using C++11, so neither auto nor range-for was available. It would be uncommon to see "const type = " with a non-reference type and a non-constant initialiser.
Even with your example, some styles avoid "const auto item", using either "auto item" or "const auto& item" instead, because the "const" matters when taking a reference, not so much with a copy.
But I appreciate your point applies to const variables with non-constant initialisers in general, in the language.
There was once a big deal in literature about const in C++ being the "better" alternative to how #define is commonly used with C for constant values, and it seemed applicable to the thread as a key distinction between C and C++, which the parent commenter seemed to have conflated by mistake.
But I'd forgotten about const (non-reference) variables accepting non-constant initialisers, and as I hadn't used C++ seriously in a while, and the language is always changing, I checked in with a couple of C++ tutorials before writing. Unfortunately those tutorials were misleading or too simple, as both said nothing about "const type x = " (non-reference/pointer) being used in any way other than for defining compile-time constants.
It's a bit embarrassing, as I read other parts of the C++ standard quite often despite not using it much these days. (I'm into compiler guts, atomics, memory models, code analysis, portability issues, etc.) Yet I had forgotten this part of the language.
So, thanks for sending me down a learning & reminder rabbit-hole and correcting my error :-)
I thought the whole point of ranges is to solve problems created by iterators, move semantics to take care of scenarios where NRVO doesn't apply, and constexpr and auto because we were hacking around them with macros (if you can even call it that)?
To me, redoing things that are not orthogonal implies that the older version is being fixed. Being fixed implies that it was incorrect. And to clarify, sure, auto types and constexpr are entirely new things we didn't have (auto changed meaning but yeah), but we were trying to "get something like that" using macros.
> To me, redoing things that are not orthogonal implies that the older version is being fixed
The older version is being improved, especially for ergonomics. Regarding your examples: ranges do not obsolete iterators, they are just a convenient way to pass around iterator pairs, and actual ranges are better implemented in terms of iterators when they are not just a composition of ranges. Similarly, move semantics has little to do with NRVO (and in fact using std::move often is suboptimal, as it inhibits NRVO).
Again, I have no idea how constexpr and auto have anything to do with macros.
auto is fixing the problem of long-ass type names for intermediaries thanks to templates and iterators.
Move is fixing the problem of unnecessary mass-construction when you pass around containers.
std::ranges was introduced because dear fucking god the syntax for iterating over a partial container. (And the endless off-by-one errors)
concepts, among other things, fix (sorta) the utter shit show that templates brought to error messages, as well as debacles like SFINAE and std::enable_if.
You're right. They're not fixing problems created by previous features. They're all fixing problems created or made massively worse by templates.
I find the concept of a context structure passed as the first parameter to all your functions with all your "globals" to be very compelling for this sort of stuff.
This is very similar to dependency injection. Separating state and construction from function or method implementation makes things a lot easier to test. In my opinion it's also easier to comprehend what the code actually does.
That just seems like globals with extra steps. Suddenly if your context structure has a weird value in it, you’ll have to check every function to see who messed it up.
First, that's true for globals as well. Second, with "context structure" pattern, the modifications to it are usually done by copying this structure, modifying some fields in the copy and passing the copy downwards, which severely limits the impact radius and simplifies tracking down which function messed it up: it's either something above you in the call stack or one of the very few (hopefully) functions that changes this context by-reference, with intent to apply such changes globally.
This plus immutable data is what makes doing web apps in Elixir using Phoenix so nice. There is a (demi-)god "%Conn" structure passed as the first parameter that middleware and controller actions can update (by returning a new struct). The %Conn structure is then used in the final step of the request cycle to return data for the request.
For non-web work, GenServers in Elixir have immutable state that is passed to every handler. This is "local" global state, and since GenServers guarantee ordering of requests via the mailbox, handlers can update state by returning a new state value and you never have race conditions.
That's exactly why I used this specific example. I've seen many codebases that use cloning to avoid mutation problems, so I wrote this specifically to show it can become a problem too.
I wrote a better article on globals. I plan on posting it next week
This seems more an issue with not understanding structuredClone than one of understanding globals or the lack thereof. There's nothing wrong with the example; it does exactly what the code says it should. If you want counter to be "global" then structuredClone isn't the function you want to call. The bug isn't in how counter was stored in obj, the bug is in calling structuredClone when its behaviour wasn't wanted.
With that said, it seems obvious that if you want to globally count the calls, then that count shouldn’t live in an argument where you (the function) don’t control its lifetime or how global it actually is. Simple has no say over what object obj.counter points to, it could trivially be a value type passed into that particular call, so if you know you want a global count then of course storing it in the argument is the wrong choice.
Global has two conflated meanings: global lifetime (ie lifetime of the whole program) and global access (which the article states). Simple needs global lifetime but not global access.
You rarely ”need” global access, although for things like a logger it can be convenient. Often you do need global lifetime.
If I have 500 functions, I don't want to extrapolate out the overhead of passing a state object around to all of them. That's a waste of effort, and frankly makes me think you want to code using an FP paradigm even in imperative languages.
Module-level and thread-level "globals" are fine. You gain nothing (other than some smug ivory tower sense of superiority) by making your functions pure and passing around a global state object to every single method invocation.
If that’s so useful, make your language support the concept of lexical environments instead. Otherwise it’s just manual sunsetting every day of the week. Our craft is full of this “let’s pretend we’re in a lisp with good syntax” where half of it is missing, but fine, we’ll simulate it by hand. Dirt-and-sticks engineering.
(To be clear, I’m just tangentially ranting about the state of things in general, might as well post this under somewhere else.)
You can have it either way, it’s not for you but for people who disagree with what they deem a preference that is the only option when there’s no alternative.
I got into this argument with my former coworkers. Huge legacy codebase. Important information (such as the current tenant of our multi-tenant app) was hidden away in thread-local vars. This made code really hard to understand for newcomers because you just had to know that you'd have to set certain variables before calling certain other functions. Writing tests was also much more difficult and verbose. None of these preconditions were of course documented. We started getting into more trouble once we started using Kotlin coroutines which share threads between each other. You can solve this (by setting the correct coroutine context), but it made the code even harder to understand and more error-prone.
I said we should either abolish the thread-local variables or not use coroutines, but they said "we don't want to pass so many parameters around" and "coroutines are the modern paradigm in Kotlin", so no dice.
You know what helps manage all this complexity and keep the state internally and externally consistent?
Encapsulation. Provide methods for state manipulation that keep the application state in a known good configuration. App level, module level or thread level.
Use your test harness to control this state.
If you take a step back I think you’ll realize it’s six of one, half dozen of the other. Except this way doesn’t require manually passing an object into every function in your codebase.
These methods existed. The problem was that when you added some code somewhere deep down in layers upon layers of business code, you never knew whether the code you'd call would need to access that information or whether it had already previously been set.
Hiding state like that is IMHO just a recipe for disaster. Sure, if you just use global state for metrics or something, it may not be a big deal, but to use it for important business-critical code... no, please pass it around, so I can see at a glance (and with help from my compiler) which parts of the code need what kind of information.
I’m having a difficult time understanding the practical difference between watching an internal state object vs an external one. Surely if you can observe one you can just as easily observe the other, no?
Surely if you can mutate a state object and pass it, its state can get mutated equally deep within the codebase no different than a global, no?
What am I missing here? To me this just sounds like a discipline issue rather than a semantic one.
> To me this just sounds like a discipline issue rather than a semantic one.
Using an explicit parameter obviates the need for discipline since you can mechanically trace where the value was set. In contrast, global values can lead to action at a distance via implicit value changes.
For example, if you have two separate functions in the same thread, one can implicitly change a value used by the other if it's thread-local, but you can't do that if the value is passed via a parameter.
> These would be just as traceable in your IDE/debugger.
A debugger can trace a single execution of your program at runtime. It can't statically verify properties of your program.
If you pass state to your functions explicitly instead of looking it up implicitly, even in dynamically typed languages there are linters that can tell you that you've forgot to set some state (and in statically typed languages, it wouldn't even compile).
If your global state contains something that runs in prod but should not run in a testing environment (e.g. a database connection), your global variable based code is now untestable.
Dependency Injection is popular for a very good reason.
Sure. And a million programmers have all screamed out in horror when they realize that their single test passes, but fails when run as part of the whole suite. Test contamination is a hell paved with global variables.
Just need to make sure your module doesn't get too big or unwieldy. I work in a codebase with some "module" C files with a litany of global statics and it's very difficult to understand the possible states it can be in or test it.
I agree that so long as overall complexity is limited these things can be OK. As soon as you're reading and writing a global in multiple locations though I would be extremely, extremely wary.
I did not say you are targeting OP. I meant that you are degrading your parent commenter.
This:
"You gain nothing (other than some smug ivory tower sense of superiority) by making your functions pure and passing around a global state object to every single method invocation."
...is neither productive nor actually true. But I'll save the latter part for your other reply.
I could not initially reply to you. Your comment rubbed me the wrong way, because I had no intention of trying to degrade anyone, and frankly, I was offended. But I thought better of my hasty and emotional response. I would rather take a deep breath, re-focus, re-engage, and be educated in a thoughtful dialog than get into a mud slinging contest. I am always willing to be enlightened.
A tip, in your profile you can set a delay which is a number of minutes before your comments will become visible to other people. Mine is set to 2 right now. This gives you time to edit your comment (helpful for some longer ones) but also to write some garbage response and then think better and delete it before anyone's the wiser.
It's also helpful to give you time to re-read a mostly good, but maybe not polite, response and tone it down.
Even in a single-threaded environment, passing a struct around containing values allows you to implement dynamic scoping, which in this case means, you can easily call some functions with some overridden values in the global struct, and that code can also pass around modified versions of the struct, and so on, and it is all cleanly handled and scoped properly. This has many and sundry uses. If you just have a plain global variable this is much more difficult.
Although Perl 5 has a nice feature where all global variables can automatically be treated this way by using a "local" keyword. It makes the global variables almost useful, and in my experience, makes people accidentally splattering globals around do a lot less damage than they otherwise would because they don't have to make explicit provision for scoped values. I don't miss many features from Perl but this is on the short list.
And do you always know beyond any reasonable doubt that your code will be single-threaded for all time? Because the moment this changes, you're in for a world of pain.
Wrapping the globals into a context struct under #ifdef MULTITHREADED and adding this ctx as the first argument of each call is a matter of minutes. I've done this multiple times.
Much worse is protecting concurrent writes, e.g. to an object or hash table.
If the discussion here is meant to be solely about JavaScript, then I'll happily consider all my comments in this thread to be obsolete, since I don't have particularly strong opinions about that language since I don't use it a lot.
I was under the impression that many people here were discussing the usage of global variables more generally, though.
It seems to me like the final optimization is the only one that is not correct, and obviously so.
> The final optimization notices that q is never written to, so we can replace q[0] by its initial value 0.
However the line right above is
*(p+1) = 10;
The type of p is char[], which decays into a pointer, and the type of (p+1) is char *. Because it is char * and not char * restrict, any assignment through such a pointer can be assumed to modify any and all data in a program.
Therefore, the assumption that p[0] does not change and can be swapped out is not somehow surprisingly wrong, but simply entirely unfounded.
It is up to the compiler to prove for itself that assignment through pointers does not change data it's interested in from before to after the assignment. (Except, of course, in the case of restrict pointers, where the programmer makes this guarantee.) This is not very difficult, at least conceptually.
If the assignment was
*p = 10;
Well, we know that *p means p[0], which is within the bounds of the array, and we know that p and q do not overlap. Therefore, this assignment can not modify q.
In the case of the first assignment, we know that *(p+1) means p[1], which is out of bounds for the array. So we do not know whether or not an assignment through *(p+1) modifies q, and we can not assume it does not.
It's not always correct, but that's kinda the tradeoff with undefined behavior. It's your responsibility to know the defined spec and stay within it, not the compiler's to prove you did it correctly in all cases.
>Since q and p point to different local variables, a pointer derived from p cannot alias q[0], and hence we know that this write cannot affect the value stored at q[0].
Pointer math outside the array bounds isn't defined, so it can be treated as anything, including "cannot possibly do X, so we can optimize it out". A compiler that preferred to spend memory to improve correctness (e.g. Valgrind) might add padding between variables to prevent this kind of small-OOB action, which would make the output change again, which is why this is undefined and you can't rely on it. A compiler that was feeling moody could put that array at the end of your page of addressable memory, and cause a segfault when you try to write to it.
In the actual code *(p+1) is only ever accessed if it has been verified to be the same address as q[0], which is an object which we know exists and we have full access to.
The problem is that pointers of the same type to the same address but originating from pointer arithmetic on the pointers obtained by dereferencing different objects have different semantics.
In other words, the following is not semantically a NOP.
I believe this goes against the principle of least astonishment (I find it quite astonishing, personally). I believe, even if the standard does not guarantee the expected behavior, that implementations should still guarantee it. And they can do the optimizations they want in other ways, as I have outlined.
While I generally agree and I much prefer my langs with minimal undefined behavior, this gets solidly into Hyrum's Law territory, and it means practically every significant optimization could break something therefore it cannot be done. C just doesn't have rich enough semantics to allow much else.
You can find some languages that try to adhere to that, but C is extremely clearly not part of that group. It never has been, and it never will be.
The problem with C is not necessarily the idea of undefined behavior. It's the culture around undefined behavior. There is no culture of avoiding UB in a programming language community that glorifies UB as necessary. That's the paradox.
It's like you're supposed to cross an enemy's minefield, with the enemy laying ever more mines in your path, and yet you're supposed to walk through the minefield and hope for the best.
You'd have thought that C programmers have the best tools and techniques to avoid UB by now, so that UB would be a non-issue, a relic of the past that people remember suffering from in the 90s.
> The type of p is char[] which decays into a pointer and the type of (p+1) is char *. Because it is char * and not char * restrict, any assignment through such a pointer can be assumed to modify any and all data in a program.
Out of bounds access is classic UB. The compiler is allowed to assume that you never access an array out of bounds. Since you never access p out of bounds, the effects of any write to p will not affect any "object" that is not p.
>Therefore, the assumption that p[0] does not change and can be swapped out is not somehow surprisingly wrong, but simply entirely unfounded.
I hope you notice your small mistake. It's print(q[0]), not print(p[0]), there is no "p" in the print. It's q[0] and q[0] is not p. The write to p did not affect q, because that would be UB.
>It is up to the compiler to prove for itself that assignment through pointers does not change data it's interested in from before to after the assignment. (except, of course, in the case of restrict pointers, where the programmer makes this guarantee). This is not very difficult, at least conceptually.
The code accesses q[0], not iq. You might say that if you did print(*iq), that the compiler should definitively know that iq and ip are the same and therefore a write to ip would also be a write to iq, but the code uses q[0], which is a memory address that is not proven to be identical to ip.
Then again, what you're hoping for here is that the compiler fixes the UB for you, which is not the point of UB. You're supposed to rely on your godly C programmer instincts to never write code with UB in the first place.
This statement "any assignment through such a pointer can be assumed to modify any and all data in a program." is incorrect. You are not allowed to use a pointer to one object to modify another. At least this is what the C standard (somewhat imprecisely) implies and compilers generally exploit for optimization.
Most people agree that the right solution is that the integer round trip cannot simply be assumed to be a no-op, and that it has the effect that the recreated pointer can be used to point to the other object (as proposed by us in TS 6010). (So in this sense you are right.) The LLVM maintainer stated recently that they are going to fix this.
> You are not allowed to use a pointer to one object to modify another.
In this context, "you" means the author of a program written in Standard C. Implementations can choose to allow this (and in practice all do, because of the nature of pointers). And obviously, a code transformer for a particular implementation of C would only need to generate code valid for that implementation of C, not valid Standard C code.
True, an implementation could choose to allow this. But it is a bit questionable to say they do in practice, because most will miscompile your code in this case in various circumstances.
This is basically how you would do it in C as well.
I haven't had as much time as I would have liked to play around with Zig. But, it seems like Zig is like C, but with much stronger static checks, much less UB, lots of useful syntactic sugar, an anti-global agenda (explicitly passed allocator in Zig vs strtok in C) etc. And, truthfully, I really like this.
If we can move from a C/C++/Java world to a Zig/Rust/Go world, I would view that as an absolute win.
And the anytype stuff seems pretty close to C++ templates. I wonder how different comptime really is as compared to templates + constexpr?
As for C++, my experience is that templates can help produce some things in a type safe way that you later tend to regret and understand you should have never produced in the first place :-)
I just created some custom closure-producing code that takes a pointer to any function along with arguments. The arguments get stored in custom allocations and can be applied to the function at any later point. Creating the template code is quite painful and probably none of my teammates will be able to maintain it. On the flip side, closures like this might be the right API for the use case.
Comptime is pretty much C++ templates without the need for a completely separate language for metaprogramming, which is the way it should be IMO.
(The downside is that there's no template parameter inference, but I'm pretty sure that this is one of those things that Zig would avoid by design anyway.)
The article compares the Unix/C philosophy with the Lisp philosophy and concludes that Lisp is "better" (it is The Right Thing), but that Unix/C will probably end up dominating. Obviously, he was right.
The article is pretty famous around the suckless, minimalist crowd. But I think everyone who reads it takes a slightly wrong lesson from it. The point isn't that Unix/C won because it was simple. The point is that Unix/C was simple enough to use, simple enough to implement on most available hardware, and that it did everything you needed it to do.
The minimalist people have turned software simplicity into the modern version of The Right Thing, and are surprised when they find out that the "worse" (more complex) thing wins. But The Right Thing will never win. The Good Enough always wins.
Is Linux better than *BSD? Well, the *BSDs are simpler, easier to understand and reason about, etc. But OpenBSD, at least, doesn't have mount --bind, so I can't use it. Case closed. And I think that's the case for most people: Linux has thing X which I need, so I will use Linux, even if ideologically *BSD is imo better.
Is D-Bus The Right Thing? No; but it is The Good Enough. polkit, selinux, ACLs, and so on.
The most recent example I can think of is Hyprland. It is basically the only viable Wayland window manager (aside from sway). Not because it is the best designed or the simplest, or anything like that. But simply because it is the only one Good Enough to use.
SystemV init was not good enough for complex service management. systemd is. systemd simply does everything you need it to do well enough. So systemd wins. Simple as. It will only be replaced when it is no longer Good Enough.
Exactly, for some values of "good enough". (Unfortunately, IMNSHO.) systemd cannot be toppled by all-out assault. It must be subverted. Looking forward to shims which provide the systemd API but are independent reimplementations.
Something like Wine, where in the analogy systemd is the Microsoft Win32 API and we want to be able to run systemd "enabled" software on something else. Or at least re-compile it.
Wine also started with an incredible amount of stubs which did nothing, but that was often enough for many programs to work anyway.
> Looking forward to reimplementations and/or shims which provide the Systemd API but are reimplementations.
We've seen the launch of GNU's systemd equivalent, which seems to introduce more complexity at the simplest levels and is more difficult for users to understand and configure. Given that every service file is its own Guile Scheme program/DSL/whatever, understanding Scheme is critical to being able to write or reason about service files.
In essence, it seems to be trying to re-implement what systemd does, but badly.
Reimplementing systemd's API but using other discrete components would also be a bad idea. The biggest reason systemd took off is that it's all the tools a system needs, built in a way that they work well together and can be interacted with and configured consistently. Providing systemd-compatible replacement shim APIs would mean trying to tie a dozen (or more) discrete OSS projects, each with their own philosophies, developers, licenses, goals, etc. together, generating their config files consistently, and trying to mush all that together in a way that doesn't make things vastly more complex than what systemd already provides.
In short: the reason systemd is successful is that it got rid of a giant mess of unrelated or overlapping services and replaced it with something simple, coherent, and functional, built out of components that you can (in most cases) choose to use or not. Most people who hate systemd seem to hate it for largely ideological reasons ("it's too big!") and are more interested in solving those ideological problems at the expense of practical solutions to real user issues. I've yet to see someone argue against systemd because it's bad for end users; only because they don't like the way that it provides benefits for users.
It could mean that, but it could start much simpler. Instead of trying to implement the whole API surface and behaviour, focus on a single consumer i.e. a single program which today depends on systemd.
Shim out enough for it to at least run in some capacity on something which does not have systemd.
Do you have examples of what you suggest, 'a single program which today depends on systemd' which would make sense to decouple from systemd?
The amount of work that a project would require over the long-term would be pretty substantial, so I assume that there are a lot of these things which you would suggest fixing over time to be rid of systemd, but I'm not certain what the end benefit of this work would be. Interested to hear more.
Gentoo has a little list¹, along with suggestions for modifications. Instead of modifying the programs, one could experiment with a shim library and leave the program code unmodified to see if that's a viable path towards portability. The end benefit would be portability to systems lacking systemd proper.
(Now, the importance of portability is a separate question. If you want systemd to "eat the world" it's even a negative value.)
This kills me: of course, all these apps were already portable for decades, and only just recently do they all need a "viable path towards portability".
You'll find that there are very few such programs. This shouldn't be all that surprising, because most programs do not care what process started them or where stdout is logged.
The most prominent dependency on systemd components is Gnome on logind, which already has a shim.
Yeah. Exactly. It's also how Wayland and Pipewire managed to win: by being backwards compatible with X and Pulseaudio, while also overcoming their shortcomings.
The problem is that if the only complaint with systemd is that it's too complex, any backwards compatible alternative will necessarily be as complex as systemd + the new thing (at least in terms of complexity of interface).
If there are actual technical deficiencies of systemd, then sure, maybe such a backwards compatible alternative might be in order.
Also, everything expands in time. Wine may have started out with many stubs, but now we're at the point of implementing Windows-y APIs in the kernel with the sole goal of improving wine performance (NTSYNC).
> The problem is that if the only complaint with systemd is that it's too complex
As I understand it, that isn't the only, nor even the main, complaint with systemd. On the contrary, the main complaint is that, like a strangler vine (kudzu?), it's spreading everywhere, taking over functions that it originally had nothing to do with, and forcing ever more unrelated things to adapt to systemd -- thereby making them much less portable to anything that doesn't use it.
The old debate about "BTW, you should call it GNU/Linux, because..." is sooo yesterday: nowadays it's systemd/Linux -- and well on its way to becoming systemd-OS, with the kernel a mere afterthought, soon to be if not replaced then at least easily replaceable. Sure, that may not bother you or most systemd fans, but one can't help wondering: do you even realise that this is what you're condoning, or even actively advocating? Maybe it would bother you, if you realised it.
(Also, the whole "strangler vine" thing, when deliberately applied, used to have a name: "Embrace, Extend, Extinguish". Now, I can't really swear that it is being deliberately applied here... But do we really dare to blithely assume it's just a coincidence that the creator of systemd so naturally found a home at the company the expression was coined for?)
Subversion is key. Remember when "performance" was the key reason for moving to systemd (scripts are sooo slow)? To subvert, you need some lever, some chink in the armor to break in through.
D-Bus is good enough for all sorts of things. But you absolutely don't need a behemoth like systemd to use D-Bus. A D-Bus communication option can be added to existing programs and utilities, or new ones could be fashioned for which it is a central mode of operation; and still there would be no ever-growing pile of centrally-maintained and inter-dependent artifacts.
As for SystemV... as I mentioned in another comment - in hindsight, that was not the issue. There were and are SystemV alternatives [1]. One can even write one that uses D-Bus in an opt-in fashion, to foster other facilities' use of it.
The init system excuse is very much like the stone in the fable of the stone soup: https://en.wikipedia.org/wiki/Stone_Soup - it's what gets the vagabond's foot in the door, and is the first ingredient for the soup. But actually, the stone carried no significance, and the resulting soup is pretty much the same thing as without the stone part.
and one more thing about that: For PC users and "vanilla" machine setups - sysvinit, weirdly, still works fine. Startup is quick, and developers don't live in pain making it work. Of course it is kind of lame design-wise, but it's not even that we just _had_ to replace it - we've still not reached even that point.
> The most recent example I can think of is Hyprland. It is basically the only viable Wayland window manager (aside from sway). Not because it is the best designed or the simplest, or anything like that. But simply because it is the only one Good Enough to use.
I'd wager the vast majority of (non-embedded / single-purpose) Wayland deployments are gnome and KDE.
River is very good but a niche within a niche. Telling that I've been using Hyprland, though.
Part of the problem with all of them is less the wm and more knowing what parts you need. Not so much bars as portals. I'm vaguely surprised hyprland doesn't have a "good enough" batteries included config and documentation.
Hyprland does have an extremely useful "Useful Utilities"[1] section on the wiki.
You simply install everything from the "Must-Have" page, then go through the other pages and install everything you want and by the end you have a complete and functional desktop. For every component, there is a good description of how to use it in Hyprland.
I simply followed that guide, remapped some shortcuts, copied some guy's waybar config from GitHub, and, well, did little else, and had a perfectly functional Hyprland desktop that I used virtually unchanged for ~six months.
Only recently got into ricing it more, and only to the extent that now all my components (app launcher, notification manager, waybar etc.) use the same color theme.
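For anyone wondering what "remapping some shortcuts" amounts to: Hyprland keybinds live in hyprland.conf as `bind = MODS, key, dispatcher, params` lines. A small illustrative fragment (the specific apps and keys here are my own examples, not anything from the wiki):

```ini
# ~/.config/hypr/hyprland.conf -- illustrative keybind fragment
$mainMod = SUPER

bind = $mainMod, RETURN, exec, kitty         # launch a terminal
bind = $mainMod, D, exec, wofi --show drun   # app launcher
bind = $mainMod, Q, killactive,              # close the focused window
bind = $mainMod, 1, workspace, 1             # switch to workspace 1
bind = $mainMod SHIFT, 1, movetoworkspace, 1 # move window to workspace 1
```

Rebinding is just editing these lines and reloading, which is why a copied config plus a few tweaks gets you a working desktop so quickly.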
Are you sure that LLMs, because of their probabilistic nature, wouldn't bias against certain edge cases? Sure, LLMs can be used to great effect to write many tests for normal usage patterns, which is valuable. But I'd still prefer my edge cases handled by humans where possible.
I'm not sure if LLMs would do better or worse at edge cases, but I agree humans WOULD need to study the edge-case tests, like you said. Very good point. Interestingly, though, LLMs might help identify more edge cases we humans didn't see.
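As a toy illustration of the distinction being drawn here, consider the kind of tests an LLM tends to produce from normal usage versus the ones a human reviewer insists on (the clamp function is hypothetical, just a stand-in):

```python
def clamp(x, lo, hi):
    """Constrain x to the inclusive range [lo, hi]."""
    return max(lo, min(hi, x))

# Typical "normal usage" tests, easy to generate in bulk:
assert clamp(5, 0, 10) == 5
assert clamp(-3, 0, 10) == 0
assert clamp(42, 0, 10) == 10

# Edge cases a human is more likely to think to add:
assert clamp(0, 0, 10) == 0              # value exactly at the lower bound
assert clamp(10, 0, 10) == 10            # value exactly at the upper bound
assert clamp(7, 3, 3) == 3               # degenerate range, lo == hi
assert clamp(float("inf"), 0, 10) == 10  # infinities still clamp
print("all edge cases pass")
```

The first group exercises the happy path; the second probes boundaries and degenerate inputs, which is exactly where a statistical bias toward "typical" examples could leave gaps.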
The advantage of democracy is that the propaganda game gets played every few years and current elites can lose. Under a system of freedom of speech, there is very little stopping a decently (but not massively) funded rag-tag group of competent individuals from running a more efficient propaganda campaign than the powers-that-be (think of Dominic Cummings' Leave campaign in the UK for the perfect example).
This is the best system we have found to establish the impermanence of the elite class. Because this is the real beauty of what we in the west call democracy: not the absence of an elite class, for there is no such system, but its impermanence.
And while that is all well and good within a country, the argument is that it would be unwise to allow a foreign hostile power a seat at our propaganda game. Especially one which does not reciprocate this permission.
This is a thoughtful reply. But, if it's just propaganda games played by the elites, I suppose another way to ensure informed outcomes might be literacy tests. Or property ownership.
I guess more than anything I'm just surprised that it's the "threat to democracy" crowd that would be taking such a cynical view of democracy. They're admitting that Trump's propaganda was just better than theirs. Which is, in some ways, hilarious.
This looks very cool to me, and I might start using it for my projects. Anything I can use to get away from make and, especially, CMake, is a massive win in my book.
I've used mk in the past, but asking users to install Plan 9 from User Space is a big ask :). A rust program they can just build with cargo feels a lot better.
Questions:
* Is there a way to pass arguments, such as CC etc., to werk? Will config or let skip past the first assignment if there is an environment variable defined with the same name? Or how do I get my config to werk without editing the werkfile?
* Does the depfile rule get called if the depfile already exists? If so, what are the performance implications of this? If not, what happens when I include a new header in a file and the dependencies change?
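For context on the depfile question: compilers emit Make-style depfiles (e.g. gcc -MMD writes main.d containing "main.o: main.c foo.h"), and a build tool re-reads them on the next run to learn about header dependencies it couldn't know up front. I don't know how werk consumes them internally, but the format itself is simple; a minimal Python sketch of a parser (not werk's actual code):

```python
def parse_depfile(text):
    """Parse a Make-style depfile ("target: dep1 dep2 \\<newline> dep3")
    into a {target: [deps]} mapping. Handles backslash line
    continuations; does not handle escaped spaces in paths."""
    # Join continuation lines, then split each rule on its first colon.
    joined = text.replace("\\\n", " ")
    deps = {}
    for line in joined.splitlines():
        if ":" not in line:
            continue
        target, rest = line.split(":", 1)
        deps[target.strip()] = rest.split()
    return deps

example = "main.o: main.c foo.h \\\n bar.h\n"
print(parse_depfile(example))  # {'main.o': ['main.c', 'foo.h', 'bar.h']}
```

The interesting part of the question is exactly the staleness case: if foo.h gains a new #include, the old depfile doesn't mention it, so a tool that trusts a pre-existing depfile unconditionally can miss a rebuild.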
I hate looking at myself. I hate seeing myself and my dumb face.
I can spend hours upon hours on a Discord call discussing whatever, working on a collaborative project, etc. But every second I have to spend on a video call looking at myself drains my soul.
I get it if it's some investor meeting or whatever, but if it's a call for a team where we all know each other, what's the point?
I show myself to subtly remind the team that I'm a real person. Most of them see each other in-person each day, while I'm remote. But it'd be nice if they didn't see such a zoomed-in view of my face.
I relate completely. I usually solve it by sticking a post-it note on the screen so it covers my face in the call. It makes the calls feel a lot more natural and less awkward.
We use Google Meet, and this is one of the big things I miss from Zoom: there appears to be no way to turn the self view off permanently. On every call I have to turn it off manually.
I find that a lot of people who turn their camera off say they do it because they don't like watching themselves on screen, and don't realise that seeing yourself is optional. Personally I think all the video call systems should turn self view off by default; I've never understood what purpose seeing yourself all the time is supposed to serve. It would be like having a meeting room full of mirrors.
Cameras unintentionally left on would be much more common if this were not the default. I agree that it should still be possible to turn it off for every meeting via a setting, though.
True. Meet always shows you your camera view before you start the call, it would be nice to have two buttons when joining, one with your camera on and one off.
Either way, it would be nice if they put a bit more effort into these things. Video calling could be much nicer for a lot of people with a little bit of effort.
I use Librewolf (based on Firefox), but about once a year I open Chrome for some shitty website that only works on Chrome. And I use Chrome for a few minutes.
It shocks me every time just how fast Chrome is. It is legitimately a superb piece of software. Going to Librewolf after feels like going back ten years in hardware.
Can we please start spending some money to make Firefox better? Instead of whatever Mozilla is currently doing?
Firefox Quantum was great. But why stop? Just keep doing that! It's the only thing you should be doing!
uBlock Origin on both (wouldn't browse the web without it). Vimium and Dark Reader on Librewolf. But turning Dark Reader off does speed it up by _a lot_.
Chrome still seems faster but now they're both playing in the same league.
Most new features of C++ are introduced to fix problems created by previously new features added to C++.